
Multi-node Parallelism in Julia on an HPC (XSEDE Comet)

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/multi-node-parallelism-in-julia-on-an-hpc/

Today I am going to show you how to parallelize your Julia code over some standard HPC interfaces. First I will go through the steps of parallelizing a simple code, and then running it with single-node parallelism and multi-node parallelism. The compute resources I will be using are the XSEDE (SDSC) Comet computer (using Slurm) and UC Irvine’s HPC (using SGE) to show how to run the code in two of the main job schedulers. You can follow along with the Comet portion by applying and getting a trial allocation.

First we need code

The code I am going to use to demonstrate this is a simple parallel for loop. The problem can be explained as follows: given a 2-dimensional polynomial p(i,j) with coefficients c_k, find the area where |p(i,j)|<1. The function for generating these coefficients in my actual code is quite wild, so for your purposes you can replace my coefficients by generating a random array of 36 numbers. It's also a semi-sparse polynomial of at most degree 8 in each variable, so create arrays of powers powz and poww. We solve for the area by making a fine grid of (i,j), evaluating the polynomial at each location, and counting the places where its absolute value is less than 1.
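If you want to follow along without my coefficient function, a random stand-in is enough for testing (these placeholder values are mine, not the actual coefficients):

srand(1234)              # reproducible placeholder data
coefs = randn(36)        # 36 random coefficients standing in for getCoefficients
powz  = rand(0:8, 36)    # powers of i, at most degree 8
poww  = rand(0:8, 36)    # powers of j, at most degree 8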

I developed this "simple" code for calculating it on my development computer:

julia -p 4

Opens Julia with 4 processes (or you can use addprocs(4) in your code), and then

dx = 1/400
imin = -8
imax = 1
jmin = -3
jmax = 3
coefs,powz,poww = getCoefficients(A0,A1,B0,B1,α,β1,β2,β3,β4)
@time res = @sync @parallel (+) for i = imin:dx:imax
  tmp = 0
   for j=jmin:dx:jmax
    ans = 0
    @simd for k=1:36
      @inbounds ans += coefs[k]*(i^(powz[k]))*(j^(poww[k]))
    end
    tmp += abs(ans)<1
  end
  tmp
end
res = res*((imax-imin)*(jmax-jmin)/(length(imin:dx:imax)*length(jmin:dx:jmax)))
println(res)

All we did was loop over the grid of i and the grid of j, evaluate the polynomial at each point, count how many times its absolute value is less than 1, and scale by the area we integrated over. Some things to note are that I first wrote the code out the "obvious way" and then added the macros to see if they would help. @inbounds turns off array bounds checking. @simd adds vectorization at the "processor" level (i.e. AVX2 to make multiple computations happen at once). @parallel runs the loop in parallel and @sync makes the code wait for all processes to finish their part of the computation before moving on from the loop.

Note that this is not the most efficient code. The problem is that we redo a lot of floating point operations recomputing the powers of i and j at every term, so it actually works out much better to unravel the loop:

@time res = @sync @parallel (+) for i = imin:dx:imax
  tmp = 0
  isq2 = i*i; isq3 = i*isq2; isq4 = isq2*isq2; isq5 = i*isq4
  isq6 = isq4*isq2; isq7 = i*isq6; isq8 = isq4*isq4
  @simd for j=jmin:dx:jmax
    jsq2 = j*j; jsq3= j*jsq2; jsq4 = jsq2*jsq2;
    jsq5 = j*jsq4; jsq6 = jsq2*jsq4; jsq7 = j*jsq6; jsq8 = jsq4*jsq4
    @inbounds tmp += abs(coefs[1]*(jsq2) + coefs[2]*(jsq3) + coefs[3]*(jsq4) + coefs[4]*(jsq5) + coefs[5]*jsq6 + coefs[6]*jsq7 + coefs[7]*jsq8 + coefs[8]*(i) + coefs[9]*(i)*(jsq2) + coefs[10]*i*jsq3
    + coefs[11]*(i)*(jsq4) + coefs[12]*i*jsq5 + coefs[13]*(i)*(jsq6) + coefs[14]*i*jsq7 + coefs[15]*(isq2) + coefs[16]*(isq2)*(jsq2) + coefs[17]*isq2*jsq3 + coefs[18]*(isq2)*(jsq4) +
    coefs[19]*isq2*jsq5 + coefs[20]*(isq2)*(jsq6) + coefs[21]*(isq3) + coefs[22]*(isq3)*(jsq2) + coefs[23]*isq3*jsq3 + coefs[24]*(isq3)*(jsq4) + coefs[25]*isq3*jsq5 +
    coefs[26]*(isq4) + coefs[27]*(isq4)*(jsq2) + coefs[28]*isq4*jsq3 + coefs[29]*(isq4)*(jsq4) + coefs[30]*(isq5) + coefs[31]*(isq5)*(jsq2) + coefs[32]*isq5*jsq3+ coefs[33]*(isq6) +
    coefs[34]*(isq6)*(jsq2) + coefs[35]*(isq7) + coefs[36]*(isq8))<1
  end
  tmp
end
res = res*((imax-imin)*(jmax-jmin)/(length(imin:dx:imax)*length(jmin:dx:jmax)))
println(res)

That's the fast version (for the powers I am using), and you can see how far fewer computations are required by doing this. This gives almost a 10x speedup and is the fastest code I came up with. So now we're ready to put this on the HPC.

Setting up Comet

Log into Comet. I use the XSEDE single sign-on hub and gsissh into Comet from there. On Comet, you can download the generic 64-bit Julia binary from here. You can also try compiling it yourself; that didn't work out too well for me, and the generic binary worked fine. In order to do anything, you need to be on a compute node. To do this we open an interactive job with

srun --pty -p compute -t 01:00:00 -n24 /bin/bash -l

This gives you an interactive job with 24 cores (all on one node). Now we can actually compute things (if you tried before, you would have gotten an error). Now open up Julia by going to the appropriate directory and using

./julia -p 24

This will put you into an interactive session with 24 workers. Now install all your packages, etc., test your code, make sure everything is working.
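For example, a few quick sanity checks in that interactive session might look like this (just a sketch):

Pkg.status()                       # check which packages are installed
println(nprocs())                  # should print 25: the master plus 24 workers
s = @parallel (+) for i = 1:1000   # trivial parallel reduction to confirm the workers respond
  1
end
println(s)                         # should print 1000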

Running Jobs

Now that Julia is all set up and you tested your code, you need to set up a job script. It's best explained via an example:

#!/bin/bash
#SBATCH -A <account>
#SBATCH --job-name="juliaTest"
#SBATCH --output="juliaTest.%j.%N.out"
#SBATCH --partition=compute
#SBATCH --nodes=8
#SBATCH --export=ALL
#SBATCH --ntasks-per-node=24
#SBATCH -t 01:00:00
export SLURM_NODEFILE=`generate_pbs_nodefile`
./julia --machinefile $SLURM_NODEFILE /home/crackauc/test.jl

In the first #SBATCH line you put the account name you were given when you signed up for Comet. This is the account whose time will be billed. Then you give your job a name and tell it where to save the output. Next is the partition that you wish to run on; here I specify the compute partition. Then I tell it the number of nodes and tasks: I wanted 8 nodes with 24 tasks per node. The time limit is 1 hour.

8 nodes? So we need to add some MPI code, right? NOPE! Julia does it for you! This is the amazing part. What we do is export the Slurm node file and give it to Julia as the machinefile. This node file is simply a list of the compute nodes allocated to our job, generated by the job manager. Julia will automatically open up one worker process for each entry in the node file (with 24 tasks per node there are 8*24 = 192 lines in the node file, so Julia will open up 192 processes) and run test.jl.
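If you prefer to manage the workers yourself rather than use --machinefile, a rough equivalent from inside Julia would be to read that same node file and call addprocs on it. This is just a sketch, assuming SLURM_NODEFILE is exported as in the script above and passwordless SSH between nodes is set up:

machines = map(chomp, open(readlines, ENV["SLURM_NODEFILE"]))   # one hostname per task
addprocs(machines)                                              # one SSH worker per line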

Did it work? A quick check is to have the following code in test.jl:

hosts = @parallel for i=1:192
  println(run(`hostname`))
end

You should see it print out the names of 8 different hosts, each 24 times. This means our parallel loop is automatically running on 192 cores!
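An equivalent check is to ask each worker directly for its hostname (a small sketch, using the v0.4 argument order for remotecall_fetch):

hosts = [remotecall_fetch(w, gethostname) for w in workers()]   # one hostname per worker
println(unique(hosts))                                          # should list the 8 node names
println(length(hosts))                                          # should be 192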

Now add the code we made before to test.jl. When I run this in different settings, I get the following results (SIMD is the first version, Full is the second, fully unraveled version):

8 Nodes:
SIMD - .74 seconds
Full - .324 seconds

4 Nodes:
SIMD - .77 seconds
Full - .21 seconds

2 Nodes:
SIMD - .81 seconds
Full - .21 seconds

1 Node
SIMD - .996 seconds
Full - .273 seconds

In this case we see that the optimum is to use 2 nodes. As you increase the domain you are integrating over or decrease dx, you have to perform more computations, which makes using more nodes more profitable. You can check yourself that with extreme values like dx = 1/1600 it may be better to use 4-8 nodes. But this ends our quick tutorial on multi-node parallelism on Comet. You're done. If you have ever programmed with MPI before, this is just beautiful.

What about SGE?

Another popular job scheduler is SGE, which is what my UC Irvine cluster uses. To use Julia across multiple nodes on it, you do pretty much the same thing. Here you tell the scheduler you want an MPI parallel environment; this makes it generate the machine file, and you stick that machine file into Julia. The job script is then as follows:

#!/bin/bash
 
#$ -N jbtest
#$ -q math
#$ -pe mpich 128
#$ -cwd                   # run the job out of the current directory
#$ -m beas
#$ -ckpt blcr
#$ -o output/
#$ -e output/
module load julia/0.4.3
julia --machinefile jbtest-pe_hostfile_mpich.$JOB_ID test.jl

That's it. SGE makes the machine file called jbtest-pe_hostfile_mpich.$JOB_ID in your current directory, which lists the 128 processes to run (this is in the math queue, so for me it's split across 5 computers, with each node repeated once per process on that node). I just load the Julia module, stick that file into Julia as the machine file, and test it out: it prints the names of the compute nodes. On this computer I can even ssh into the nodes while a job is running and run htop. In htop I see that each computer uses as many cores as it says it's using (sometimes a little more when threading kicks in; you may need to limit the number of BLAS threads if you are doing any linear algebra and don't want it to bleed over). Success!
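On that last point, limiting BLAS threading is a one-liner; blas_set_num_threads was the Base name around Julia 0.4, while newer versions expose it as BLAS.set_num_threads:

blas_set_num_threads(1)    # keep BLAS to one thread per worker so cores aren't oversubscribed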

What about GPUs?

Using GPUs on this same problem is discussed in my next blog post!

The post Multi-node Parallelism in Julia on an HPC (XSEDE Comet) appeared first on Stochastic Lifestyle.

Using Julia’s C Interface to Utilize C Libraries

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/using-julias-c-interface-utilize-c-libraries/

I recently ran into the problem that Julia's GSL.jl library (for the GNU Scientific Library) is too new, i.e. some portions don't work as of early 2016. However, to solve my problem I needed access to adaptive Monte Carlo integration methods. This meant it was time to go in depth into Julia's C interface. I will go step by step through how I created and compiled the C code and called it from Julia.

What I wished to use were the GSL Vegas functions. Luckily, there is a good example on this page for how to use the library. I started with that code and then modified it a bit. To start, I changed the main function header to:

int monte_carlo_integrate (double (*integrand)(double *,size_t,void *),double* retRes,double* retErr,int mode,int dim,double* xl,double* xu,size_t calls){

Before, it took in no arguments, but I will want to be able to do most things from Julia. First of all, I changed the function name from main to avoid namespace issues; this is just good practice when working with shared libraries. Then I added a bunch of arguments. The first argument is a function pointer, where the function is defined just as g is in the example. Then I pass in pointers which will become the return values. I add mode (1, 2, and 3 are the different integration types), dim, xl, xu, and calls as passed-in values to easily extend the functionality. The rest of the C code stays mostly the same: I change the parts which reference g to integrand (I like the function name better) and comment out the print functions (they tend not to print correctly). So I end up with a C function as follows:

#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_monte.h>
#include <gsl/gsl_monte_plain.h>
#include <gsl/gsl_monte_miser.h>
#include <gsl/gsl_monte_vegas.h>
int monte_carlo_integrate (double (*integrand)(double *,size_t,void *),double* retRes,double* retErr,int mode,int dim,double* xl,double* xu,size_t calls){
  double res, err;
  const gsl_rng_type *T;
  gsl_rng *r;
  gsl_monte_function G = { *integrand, dim, 0 };
  gsl_rng_env_setup ();
  T = gsl_rng_default;
  r = gsl_rng_alloc (T);
  if (mode==1){
    gsl_monte_plain_state *s = gsl_monte_plain_alloc (dim);
    gsl_monte_plain_integrate (&G, xl, xu, dim, calls, r, s,
                               &res, &err);
    gsl_monte_plain_free (s);
    /* display_results ("plain", res, err); */
  }
  if (mode==2){
    gsl_monte_miser_state *s = gsl_monte_miser_alloc (dim);
    gsl_monte_miser_integrate (&G, xl, xu, dim, calls, r, s,
                               &res, &err);
    gsl_monte_miser_free (s);
    /* display_results ("miser", res, err); */
  }
  if (mode==3){
    gsl_monte_vegas_state *s = gsl_monte_vegas_alloc (dim);
    gsl_monte_vegas_integrate (&G, xl, xu, dim, 10000, r, s,
                               &res, &err);
    /* display_results ("vegas warm-up", res, err); */
    /* printf ("converging...n"); */
    do
      {
        gsl_monte_vegas_integrate (&G, xl, xu, dim, calls/5, r, s,
                                   &res, &err);
        /* printf ("result = % .6f sigma = % .6f "
                "chisq/dof = %.1fn", res, err, gsl_monte_vegas_chisq (s)); */
      }
    while (fabs (gsl_monte_vegas_chisq (s) - 1.0) > 0.5);
    /* display_results ("vegas final", res, err); */
    gsl_monte_vegas_free (s);
  }
  gsl_rng_free (r);
  *retRes = res;
  *retErr = err;
  return 0;
}

Notice how nothing was specifically changed to be "Julia-style"; this is all normal C code. So how do I use it? First I need to compile it to a shared library. This means I use the -shared flag and save it to a .so. I will need the GSL libraries loaded when compiling, so I add the -lgsl, -lgslcblas, and -lm (math) flags to link those libraries. Then I add the -fPIC flag since Julia's documentation says so, and tell it the file to compile. The complete command is:

gcc -Wall -shared -o libMonte.so -lgsl -lgslcblas -lm -fPIC monteCarloExample.c

This will give you a .so file. Now we need to use it. I will describe passing in the function last; here's the setup for everything else. The first of the other arguments are the two pointers retRes and retErr where the results will be saved. In Julia, to get a pointer, I make two 1-element arrays as follows:

x = Array{Float64}(1)
y = Array{Float64}(1)

The only other peculiar thing I had to do was to change pi from Julia’s irrational type to a Float64 (Cdouble) and use that to make the array. So for the other variables I used:

fpi = convert(Float64,pi)
xl = [0.; 0.; 0.]
xu = [fpi; fpi; fpi]
calls = 500000
mode = 3
dim = 3

Now I pass this all to C as follows:

ccall((:monte_carlo_integrate,"/path/to/library/libMonte.so"),Int32,(Ptr{Void},Ptr{Cdouble},Ptr{Cdouble},Int32,Int32,Ptr{Cdouble},Ptr{Cdouble},Csize_t),integrand_c,x,y,mode,dim,xl,xu,calls)

The first argument is a tuple where the first element is the symbol naming which function in the library to use and the second element is the library name or the path to the library. The second argument is the return type (here it's Int32 because our function returns int 0 when complete). The next portion is a tuple which defines the types of the arguments we are passing. The first is Ptr{Void}, which is used for things like function pointers. Next there are two Cdouble pointers for retRes and retErr. Then two integers, two more pointers, and a Csize_t. Lastly we put in all of the variables we want to pass in.

Recall that the result was stored into the pointers for the second and third variable passed in. These are the pointers x and y. So to get the values of res and err, I de-reference them:

res=x[1]
err=y[1]

Now res and err hold the values for the solution and the error.

I am not done yet since I didn't talk about the function! To do this, we define the function in Julia. We have to use parametric types or Julia will yell at us, and we have to ensure that the returned value is a Cdouble (in our case). So we re-write the g function in Julia as:

function integrand{T,T2,T3}(x::T,dim::T2,params::T3)
  A = 1.0 / (pi * pi * pi)
  return A / (1.0 - cos(unsafe_load(x,1))*cos(unsafe_load(x,2))*cos(unsafe_load(x,3)))::Cdouble
end

Notice that instead of x[1], we have to use the Julia function unsafe_load(x,1) to de-reference a pointer. However, since this part is in Julia, other things are much safer, like how we can use pi directly without having to convert it to a float. Also notice that we can add print statements in this function, and they will print directly to Julia. You can use this to replace the display_results function with a Julia function which prints (a sketch of one is given below). However, the integrand itself is still not able to be passed into C as-is. To do that, you have to translate it to a C function:

integrand_c = cfunction(integrand,Cdouble,(Ptr{Cdouble},Ptr{Cdouble},Ptr{Cdouble}))

Here we used the Julia cfunction function where the first argument is the function, the second argument is what the function returns, and the third argument is a tuple of C-types that the function will take on. If you look back at the ccall, this integrand_c is what we passed as the first argument to the actual C-function.
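As a side note on the printing point from before: since the result comes back to Julia, you can do the display there. A minimal sketch of a Julia-side display_results replacement (my own, not taken from the GSL example):

# Hedged sketch of a Julia replacement for the C example's display_results
function display_results(title, result, error)
  println("$title ==================")
  println("result = $result")
  println("sigma  = $error")
end

You would call it as display_results("vegas final", res, err) after de-referencing the pointers.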

Check to see that this all works. It worked for me. For completeness I will put the full Julia code below. Happy programming!

x = Array{Float64}(1)
y = Array{Float64}(1)
fpi = convert(Float64,pi)
xl = [0.; 0.; 0.]
xu = [fpi; fpi; fpi]
 
function integrand{T,T2,T3}(x::T,dim::T2,params::T3)
  A = 1.0 / (pi * pi * pi)
  return 3*A / (1.0 - cos(unsafe_load(x,1))*cos(unsafe_load(x,2))*cos(unsafe_load(x,3)))::Cdouble
end
 
integrand_c = cfunction(integrand,Cdouble,(Ptr{Cdouble},Ptr{Cdouble},Ptr{Cdouble}))
 
calls = 500000
mode = 3
dim = 3
ccall((:monte_carlo_integrate,"/home/crackauc/Public/libMonte.so"),Int32,(Ptr{Void},Ptr{Cdouble},Ptr{Cdouble},Int32,Int32,Ptr{Cdouble},Ptr{Cdouble},Csize_t),integrand_c,x,y,mode,dim,xl,xu,calls)
res=x[1]
err=y[1]

The post Using Julia’s C Interface to Utilize C Libraries appeared first on Stochastic Lifestyle.

Julia iFEM3: Solving the Poisson Equation

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/julia-ifem3/

This is the third part in the series on building a finite element method solver in Julia. Last time we used our mesh generation tools to assemble the stiffness matrix. The details for what will be outlined here can be found in this document. I do not want to dwell too much on the actual code details since they are quite nicely spelled out there, so instead I will focus on the porting of the code. The full code will be available soon in a github repository, and since most of it is a straight translation from the linked document, I'll leave it out of the post for you to find on github.

The Ups, Downs, and Remedies to Math in Julia

At this point I have been coding in Julia for over a week and have been loving it. I come into each new function knowing that if I just change the array dereferencing from () to [] and wrap vec() calls around vectors being used as indexes (and maybe add an int()), I am about 95% done with porting a function. Then I usually play with cosmetic details. There are a few little details which make code in Julia a lot prettier. For example, for the FEM solver we need to specify a function. In MATLAB, specifying the function f for which we want to solve -Δu = f inline would be done via an anonymous function like:

f = @(x)sin(2*pi.*x(:,1)).*cos(2*pi.*x(:,2));

I tend to have two problems with that code. For one, it's not math, it's programming, and so glancing at my equations and glancing at my code involves a little translation step where errors can happen. Secondly, in many cases anonymous functions incur a huge performance decrease. This fact, together with the fact that MATLAB's metaprogramming is restricted to string manipulation and eval, destroyed a project I tried a few years ago (a general reaction-diffusion solver with a GUI: the GUI took in the reaction equations, but using anonymous functions killed the performance to the point where it was useless). However, in Julia functions can be defined inline, are first class, and have a nice appearance. For example, the same function in Julia is:

f(x) = sin(2π.*x[:,1]).*cos(2π.*x[:,2]);

Two major improvements here. For one, since the parser knows that variable names cannot start with a number, it interprets 2π as 2 multiplied by the mathematical constant π. Secondly, yes, that's a π! Julia allows unicode to be entered (in Juno you type the LaTeX \pi and hit tab), and some unicode symbols are set to their appropriate mathematical constants. This is only cosmetic, but in the long run I can see this as really beneficial for checking the implementation of equations.
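For instance, a tiny illustration of both features:

2π          # 6.283185307179586: a numeric literal next to a name means multiplication
α = 0.5     # unicode identifiers work too (type \alpha and hit tab)
6α + 1      # 4.0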

However, not everything is rosy in Julia land. For one, many packages, including the FEM package I am working with, are in MATLAB. Luckily, the interfacing via MATLAB.jl tends to work really well. In the first post I showed how to do simple function calls. This did not work for what I needed to do since these function calls don’t know how to pass a function handle. However, digging into MATLAB.jl’s package I found out that I could do the following:

  put_variable(get_default_msession(),:node,node)
  put_variable(get_default_msession(),:elem,elem)
  put_variable(get_default_msession(),:u,u)
  eval_string(get_default_msession(),"sol = @(x)sin(2*pi.*x(:,1)).*cos(2*pi.*x(:,2))/(8*pi*pi);")
  eval_string(get_default_msession(),"Du = @(x)([cos(2*pi.*x(:,1)).*cos(2*pi.*x(:,2))./(4*pi) -sin(2*pi.*x(:,1)).*sin(2*pi.*x(:,2))./(4*pi)]);")
 
  eval_string(get_default_msession(),"h1 = getH1error(node,elem,Du,u);");
  eval_string(get_default_msession(),"l2 = getL2error(node,elem,sol,u);");
  h1 = jscalar(get_mvariable(get_default_msession(),:h1));
  l2 = jscalar(get_mvariable(get_default_msession(),:l2));

Here I send the variables node, elem, and u to MATLAB. Then I directly evaluate strings in MATLAB to make the function handles. With all of the variables appropriately in MATLAB, I call the function getH1error and save its value (in MATLAB). I then use get_mvariable to bring the result into Julia. That value is of a MATLAB type, so I use MATLAB.jl's conversion function jscalar to get the scalar result h1. As you can see, using MATLAB.jl in this fashion is general enough to handle any of your linking needs.
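Since this put/eval/get pattern comes up repeatedly, one could wrap it in a small helper. A hedged sketch (the function name and signature here are my own, not part of MATLAB.jl):

using MATLAB

# Hypothetical helper: push variables to MATLAB, run a snippet, pull back one scalar
function matlab_scalar(code::AbstractString, result::Symbol, vars::Dict{Symbol,Any})
  s = get_default_msession()
  for (name, val) in vars
    put_variable(s, name, val)        # send each Julia variable to the MATLAB session
  end
  eval_string(s, code)                # run the MATLAB code that computes `result`
  return jscalar(get_mvariable(s, result))
end

With that, the getL2error call above shrinks to a single matlab_scalar call taking the eval string and a Dict of node, elem, and u.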

For very specialized packages, this is good. For testing the ported code for correctness, this is great. However, I hope not to do this in general. Sadly, every once in a while I run into a missing function. In this example, I needed accumarray. It seems I am not the only MATLAB exile, as once again Julia implementations are readily available. The lead Julia developer Stefan has a general answer:

function accumarray2(subs, val, fun=sum, fillval=0; sz=maximum(subs,1), issparse=false)
   counts = Dict()
   for i = 1:size(subs,1)
        counts[subs[i,:]]=[get(counts,subs[i,:],[]);val[i...]]
   end 
   A = fillval*ones(sz...) 
   for j = keys(counts)
        A[j...] = fun(counts[j])
   end
   issparse ? sparse(A) : A
end

Timings

  0.496260 seconds (2.94 M allocations: 123.156 MB, 8.01% gc time)
  0.536521 seconds (2.94 M allocations: 123.156 MB, 8.83% gc time)
  0.527007 seconds (2.94 M allocations: 123.156 MB, 9.41% gc time)
  0.544096 seconds (2.94 M allocations: 123.156 MB, 9.76% gc time)
  0.526110 seconds (2.94 M allocations: 123.156 MB, 12.22% gc time)

whereas Tim's answer has fewer options but achieves better performance:

function accumarray(subs, val, sz=(maximum(subs),)) 
    A = zeros(eltype(val), sz...) 
    for i = 1:length(val) 
        @inbounds A[subs[i]] += val[i] 
    end 
    A 
end

Timings

  0.000355 seconds (10 allocations: 548.813 KB)
  0.000256 seconds (10 allocations: 548.813 KB)
  0.000556 seconds (10 allocations: 548.813 KB)
  0.000529 seconds (10 allocations: 548.813 KB)
  0.000536 seconds (10 allocations: 548.813 KB)
  0.000379 seconds (10 allocations: 548.813 KB)
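For reference, a quick sanity check of what accumarray computes (using Tim's version above):

subs = [1; 2; 1; 2]               # the bin each value goes into
vals = [10.0; 20.0; 30.0; 40.0]
accumarray(subs, vals)            # returns [40.0, 60.0]: values summed per bin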

Why is Julia “Missing” So Many Functions? And How Do We Fix It?

This is not the first MATLAB function I found myself missing. Off the top of my head I know I had to get/make versions of meshgrid and dot(var,dimension) just in the last week. To the dismay of many MATLAB exiles, many of the Julia developers are against "cluttering the base" with these types of functions. While it is easy to implement these routines yourself, many of them are simple and get re-implemented by mathematical programmers around the world. Setting a standard name and implementation for these functions helps code reusability and readability.
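For example, here is a minimal MATLAB-style meshgrid (a sketch of the kind of helper I mean, not necessarily the exact implementation I used):

# MATLAB-style meshgrid: X repeats vx along each row, Y repeats vy along each column
function meshgrid(vx::AbstractVector, vy::AbstractVector)
  X = [x for y in vy, x in vx]
  Y = [y for y in vy, x in vx]
  return X, Y
end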

However, the developers do make a good point that there is no reason for these functions to be in the Base. Julia’s Base is for functions that are required for general use and should be kept small in order to make it easier for the developers to focus on the core functionality and limit the resources required for a standard Julia install. This will increase the number of places where Julia could be used/adopted, and will help ensure the namespace isn’t too full (i.e. you’re not stepping on too many pre-made functions).

But mathematicians need these functions. That is why I will be starting a Julia Extended Mathematical Package. This package will be to hold the functions that are not essential language functions, but “essential math language” functions like you’d find in the base of MATLAB, R, numpy/scipy, etc., or even just really useful routines for mathematical programming. I plan on cleaning up the MATLAB implementations I have found/made ASAP and putting this package up on github for others to contribute to. My hope is to have a pretty strong package that contains the helper functions you’d expect to have in a mathematical programming language. Then just by typing using ExtendedMath, you will have access to all the special mathematical functionality you’re used to.

Conclusion

As of now I have a working FEM solver in Julia for Poisson's equation with mixed Neumann and Dirichlet boundary conditions. This code has been tested for convergence and accuracy and is successful. However, this code still has some calls to MATLAB. I hope to clean this up, and after this portion is autonomous, I will open up the repository. The next stage will be to add more solvers: more equations, adaptive solvers, etc., as I go through the course. Also, as mentioned before, I will be refactoring out the standard mathematical routines and putting them into a different library which I hope to get running ASAP for others to start contributing to. Stay tuned!

The post Julia iFEM3: Solving the Poisson Equation appeared first on Stochastic Lifestyle.