
Some State of the Art Packages in Julia v1.0

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/some-state-of-the-art-packages-in-julia-v1-0/

In this post I would like to explain some of the state of the art packages in Julia and how they are pushing forward their disciplines. I think the part I would really like to make clear is the ways in which Julia gives these packages a competitive advantage and how these features are being utilized to reach their end goals. Maybe this will generate some more ideas!

What are the advantages for Julia in terms of package development?

Using Python/R to call other people’s packages is perfectly fine, since then your code will be just as fast as the package code if all of your time is spent in package code calls (I purposely leave MATLAB off this list because its interoperability is much more difficult to use). There is one big exception where using an efficient package can still be slow even when all of the time is spent in package code instead of Python/R: you will run into performance issues with any package that requires user-defined function inputs (i.e. any higher-order function interface), like differential equation, optimization, etc. libraries. In a recent blog post we showed that Numba+SciPy integrators are still 10x slower than they should be, so it’s not the easy solution some people make it out to be. The reason, of course, is part of the design: Numba doesn’t do interprocedural optimizations through the Fortran codes, nor through any code that is compiled separately from the user’s input function. These context-switching costs particularly hurt when the function calls are small, like the callback functions of an optimization modeling language, because every call has to go back through the Python interpreter. So even if you have a fast function call in a fast package, you can still have a limiting factor! If you have an asymptotically costly function then this can be mitigated, but that puts a lot of limitations on the standard use cases.

This is why some Python optimization and differential equation libraries actually take strings or SymPy expressions as input (PyDSTool and JiTCODE are two of many examples): direct Numba/Cython usage for function input has these problems, so the library authors have to write a compiler behind the scenes, which then of course means you have restrictions like being limited to Float64 without arbitrary number types, etc. etc. etc. Once you have your own compilation toolchain, you have to explicitly choose what kinds of functions and objects you can support, and so throwing in arbitrary user code isn’t necessarily going to work. You can see how this not only limits features but also adds to development complexity. In fact, this is clearly seen in how PyTorch has a full JIT compiler inside of it along with a full math library! Developing PyTorch was thus more like developing Julia and an ML library at the same time, instead of just building an ML library. And even then it doesn’t get the automatic composability features of Flux.jl, so you have to build everything in there yourself… you can see how the difference in package development efficiency is quite astronomical.

So how does this translate to packages? First, Data Science and ML

Higher-order functions and the ability to compose with other packages and use their types: that’s where our competitive advantage is. This fact is reflected in the state-of-the-artness of packages. A lot of data science, statistics, and ML libraries just take a matrix of data in and spit out predictions. In that sense, all of the time is spent in package code, and while Julia does have a competitive advantage for building such packages (building a JIT compiler is not a prerequisite for building a neural net library in Julia), it doesn’t have an advantage for using such packages. This is why data science and ML have done just fine in Python, and statistics has done just fine in R. It would be hard to find real Julia gems in these areas since existing code works. The advantage Julia gives is simply that it’s easier to create the package, and so one would expect it to have brand new methods due to someone’s recent research. I’d point to Josh Day’s OnlineStats.jl as an example of using Julia for bleeding edge algorithms.
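To give a rough feel for that kind of work, here is a sketch of single-pass streaming statistics with OnlineStats.jl (the exact constructors and `fit!` methods have shifted a bit across OnlineStats versions, so treat this as illustrative rather than definitive):

```julia
using OnlineStats

# Single-pass, constant-memory statistics: estimates update as data streams in.
o = Series(Mean(), Variance())
for chunk in (randn(10_000) for _ in 1:100)  # stand-in for data arriving in chunks
    fit!(o, chunk)
end
value(o)  # current estimates after 10^6 observations, without ever storing them
```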

Now let’s look at Bayesian estimation and MCMC. For this area of data science you do need to take in full models. R and Python have resorted to taking Stan models in the form of a string. Stan is good, but let’s admit that this interface is subpar. You’re writing in another language (Stan), inside a string (without syntax highlighting), in order to do Bayesian estimation from a scripting language? Julia has been building libraries which are easier to use due to being native Julia, like Turing.jl and DynamicHMC.jl. These libraries let you write your likelihood function as just standard Julia code, making it much easier to build complicated models. For example, Stan only lets you use its two built-in differential equation solvers (with a total of 5 options…) and only for ODEs, but Turing.jl and DynamicHMC.jl can both internally utilize DifferentialEquations.jl code for ODEs, SDEs, DAEs, DDEs, etc. That kind of composition, done efficiently, is something you will not find elsewhere.
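For a concrete feel, here is a minimal sketch of a native-Julia Turing.jl model (the `@model` macro syntax has evolved across Turing versions; the point is that the body is ordinary Julia code, so a `solve(ODEProblem(...))` call from DifferentialEquations.jl could sit right inside it):

```julia
using Turing, Distributions

@model function gaussian_mean(y)
    μ ~ Normal(0, 10)
    σ ~ truncated(Normal(0, 5), 0, Inf)
    for i in eachindex(y)
        y[i] ~ Normal(μ, σ)     # the likelihood is just Julia code
    end
end

data  = randn(50) .+ 3
chain = sample(gaussian_mean(data), NUTS(), 1_000)
```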

And neural net libraries are not as blackbox as most of ML. To get around the issues I mentioned earlier, you cannot just pass functions in; you have to tell the neural net library how to build the entire neural net. This is why PyTorch and TensorFlow are essentially languages embedded into Python which produce objects that represent the functions you want to write, and which pass those functions to the package to compile the right code (and utilize the GPU). Obviously this would be hard to write on your own, but the main disadvantage to users is that normal user code doesn’t work inside of these packages (you cannot just call arbitrary SciPy code, for example; PyTorch can do some of it if you can get its autodiff working, but it cannot autodiff through wrapped C++/Fortran code, which is how most packages are written!). But Flux.jl has full composability with Julia code, so it’s definitely state-of-the-art and has the flexibility that other neural net libraries are trying to build towards (TensorFlow is rebuilding into Swift in order to get this feature of Flux.jl).
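As a rough sketch of what that composability looks like (Flux’s training/AD API has changed across versions, and `feature_map` here is just a made-up user function, so take this as illustrative):

```julia
using Flux

m = Chain(Dense(2, 16, tanh), Dense(16, 1))

# Arbitrary user Julia code can appear in the loss; it is not a Flux primitive.
feature_map(x) = vcat(x, sin.(x))            # hypothetical user-defined transform
loss(x, y) = sum(abs2, m(feature_map(x)) .- y)

x, y = rand(1, 10), rand(1, 10)
gs = gradient(() -> loss(x, y), Flux.params(m))   # differentiates through it all
```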

What about Scientific Computing?

But where things tend to be wide open is scientific computing. This domain has to deal with user-input functions and abstract types. So I’d point not just to DifferentialEquations.jl and JuMP, which are the clear well-funded front-runners in their disciplines, but also to a lot of the tooling around this area. IterativeSolvers.jl is a great example of something heavily unique. At face value, IterativeSolvers.jl looks a lot like Pysparse’s iterative solvers module or the corresponding part of SciPy. Except it’s not when you look at the details. Those both require a NumPy matrix, specifically a dense/sparse matrix, in each documented function. However, one of the big uses of these algorithms is not on sparse matrices at all but on matrix-free representations of `A*x`, and you cannot do this with those Python packages because they require an actual matrix. In IterativeSolvers.jl you just pass a type which has `mul!` (`*`) overloaded to be the function call you want, and you have a matrix-free iterative solver algorithm. Complex numbers, arbitrary precision, etc. are all supported here, which is important for ill-conditioned and quantum physics uses. Using abstract types, you can also throw GPUArrays in there and have it just run. This library only keeps getting better.
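To make the matrix-free idea concrete, a rough sketch (exactly which of `size`/`eltype`/`mul!` a given solver requires can vary by IterativeSolvers version; the `TridiagOp` type is just an example operator):

```julia
using IterativeSolvers, LinearAlgebra

# A matrix-free operator: it only knows how to compute A*x, never storing A.
struct TridiagOp
    n::Int
end
Base.size(A::TridiagOp)         = (A.n, A.n)
Base.size(A::TridiagOp, i::Int) = A.n
Base.eltype(::TridiagOp)        = Float64

function LinearAlgebra.mul!(y, A::TridiagOp, x)
    n = A.n
    @inbounds for i in 1:n
        y[i] = 4x[i]
        i > 1 && (y[i] -= x[i-1])
        i < n && (y[i] -= x[i+1])
    end
    return y
end
Base.:*(A::TridiagOp, x::AbstractVector) = mul!(similar(x), A, x)

b = rand(1000)
x = cg(TridiagOp(1000), b)   # conjugate gradient without ever forming the matrix
```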

One of the best scientific computing packages is BandedMatrices.jl. It lets you directly define banded matrices and then overloads things like `*` and `\` to use the right parts of BLAS. Compared to just using a sparse matrix (the standard MATLAB/Python way), this is SCREAMING FAST (the QR factorization difference is pretty big). Banded matrices show up everywhere in scientific computing (pretty much every PDE has a banded matrix operator). If it’s not a banded matrix, then you probably have a block-banded matrix, and thank god that @dlfivefifty also wrote BlockBandedMatrices.jl which will utilize the same speed tricks but expanded to block banded matrices (so, discretizations of systems of PDEs).
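For a feel of the interface, something like the following (a sketch: `brand` builds a random banded test matrix, and availability of the banded `qr` path depends on the BandedMatrices version):

```julia
using BandedMatrices, LinearAlgebra

# A 1000×1000 banded matrix with 1 sub- and 1 super-diagonal; only the bands are stored.
A = brand(1000, 1000, 1, 1)      # random entries on the stored bands
A[band(0)] .= 4.0                # make it strongly diagonally dominant

b = rand(1000)
x = A \ b      # dispatches to banded LAPACK solves, not dense or generic sparse ones
F = qr(A)      # banded QR, where the speed difference over sparse is dramatic
```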

In fact, I would say that @dlfivefifty is an unsung hero of the backend of Julia’s scientific computing ecosystem. ApproxFun.jl is another example of something truly unique. While MATLAB has chebfun, ApproxFun is similar but creates lazy, matrix-free operators (so you can send these linear operators directly to things like, you guessed it, IterativeSolvers.jl). It uses infinite-dimensional array types from InfiniteArrays.jl to grow and perform whatever size of calculation is needed without ever allocating the matrices.
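A tiny sketch of the ApproxFun style, based on its documented examples (function objects you manipulate like numbers, with operators kept lazy):

```julia
using ApproxFun

x = Fun(identity, 0..10)   # the identity function on [0,10], as a Chebyshev expansion
f = sin(x^2) + cos(x)      # adaptively resolved to (roughly) machine precision
f(2.0)                     # evaluate
g = f'                     # differentiate
sum(f)                     # definite integral over the domain
D = Derivative()           # a lazy, matrix-free (infinite-dimensional) operator
```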

While these may look like niche topics if you don’t know about them, these are the types of libraries which people need in order to build packages that need fast linear algebra, which is essentially anything in data science or scientific computing. So Julia’s competitive advantage in package building is compounded by the fact that there are now great, unique fundamental numerical tools!

Along the lines of these “fundamental tools” packages is Distributions.jl. It’s quite a boring package: it’s just a bunch of distributions with overloads, but I think Josh Day’s discussion of how you can write code which is independent of the choice of distribution is truly noteworthy (and if you want a good introduction-to-Julia video, Josh’s is amazing). If you want to write code to compute quantiles in R or Python, you need a different function for each possible distribution, or you have to take in a function that computes the pdf (which of course then gives you the problem of function input in these languages!). Package developers in other languages have thus far just hard-coded the distributions they need into things like SciPy, but it’s clear what the scalability of that is.
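For example, here is the kind of distribution-generic code that paragraph is describing, a simple sketch using the standard Distributions.jl API (the `central_interval` helper is just made up for illustration):

```julia
using Distributions

# Works for any univariate distribution: no per-distribution quantile functions needed.
central_interval(d::UnivariateDistribution, mass = 0.9) =
    (quantile(d, (1 - mass) / 2), quantile(d, 1 - (1 - mass) / 2))

central_interval(Normal(0, 1))
central_interval(Gamma(2, 3))
central_interval(TDist(4), 0.95)
```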

And then I’d end with JuMP, DifferentialEquations.jl, and LightGraphs.jl as very user-facing scientific computing packages which sit in very important domains but don’t really have a comparable library in another scripting language. JuMP’s advantage is that it lets you control many things, like branch-and-cut algorithms, directly from JuMP (along with building sparse Hessians via autodifferentiation), something which isn’t possible with “alternatives” like Pyomo, so if you’re deep into a difficult problem it’s really the only choice. (NOTE: JuMP has not updated to Julia v1.0 yet, so you’ll have to wait!) DifferentialEquations.jl is one of the few libraries with high order symplectic integrators, one of the few libraries with integrators for stiff delay differential equations, one of the two high order adaptive stochastic differential equation libraries (the other being a (pretty good) re-implementation of DiffEq’s SRIW1), one of the only libraries with exponential integrators, one of the only libraries with… I can keep going, and of course it does well in first-order ODE benchmarks. So if you’re serious about solving differential-equation-based models, then in many cases it’s hard to even find an appropriate alternative. And LightGraphs is on a truly different level performance-wise than what came before, and it only keeps getting better. Using things like MetaGraphs lets you throw metadata in there as well. Its abstract structure lets you write and use graph algorithms independent of the graph implementation, and lets you swap implementations depending on what would be efficient for your application. So if you’re serious about technical computing, these are three libraries to take seriously.
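As a quick taste of the DifferentialEquations.jl side of this (the standard documented usage, shown here for the Lorenz system):

```julia
using DifferentialEquations

function lorenz!(du, u, p, t)
    du[1] = 10.0 * (u[2] - u[1])
    du[2] = u[1] * (28.0 - u[3]) - u[2]
    du[3] = u[1] * u[2] - (8 / 3) * u[3]
end

prob = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 100.0))
sol  = solve(prob, Tsit5())   # the same interface carries over to SDEs, DAEs, DDEs, etc.
```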

These packages have allowed many domain-specific tools to be built. Physics is one domain which is doing really well in Julia, with DynamicalSystems.jl (recent winner of the DSWeb 2018 prize from SIAM Dynamical Systems!) and QuantumOptics.jl being really top-notch packages (I don’t know of a DynamicalSystems.jl comparison in any other language, and I don’t know of anything that handles all of the quantum stochasticity etc. of QO).

There’s still more to come, of course. Compilation tools like PackageCompiler.jl and targeting asm.js with Charlotte.jl are still in their infancy. JuliaDB is shaping up to be a more flexible version of dask, and it’s almost there. We’re missing good parallel linear algebra libraries like a native PETSc, but of course that’s the kind of thing that could never be written in Python (Julia is competing against C++/Fortran/Chapel here). MPIArrays.jl might get there, though with a lot of work.

Conclusion

I hope this is more of a “lead a :horse: to water” kind of post. There are a lot of unique things in Julia, and I hope this explains why certain domains have been able to pull much further ahead than others. Most of the technical computing population is still doing data science on Float64 arrays with blackboxed algorithms (TSne, XGBoost, GLM, etc.), and that drives a lot of the “immature” talk, because what they see in Julia is similar to, or just a recreation of, what exists elsewhere. But in the more numerical analysis and scientific computing domains, where higher precision matters (“condition number” is common knowledge), functions are inputs, and there are tons of unique algorithms in the literature that still have no implementation, I would even say that Julia surpassed MATLAB/Python/R a while ago, since it’s hard to find packages comparable to ours in these other languages.

If we want to capture the last flag, we really need to build out the competitive advantage Julia has in the area of data science / ML package development, because until something is unique it’ll just be “recreations”, which don’t draw the biggest user audience. While “programmers” will like the purity of one language and the ability to develop their own code, I think the completeness of R for statistics or of Python for ML matters more to this group. I think a focus on big data via streaming algorithms + out-of-core processing is a good strategy, but it’s not something that can’t be done elsewhere (dask), so there’s something more that’s required here.

Personally, I think part of the story will be around harnessing Julia’s generic array types to take advantage of new hardware: new ML packages will not need to directly hardcode for TPUs if there is a TPUArray type that users can give to the package. (Note the difference here: the developers do not need to be “TPU-aware” in this case; instead, the user can mix types together and use automatic code generation to get an efficient TPU-based neural net or other ML algorithm.) This “bottom-up” code generation via the type system and fully dependent compilation is a place where Julia has a competitive advantage, since package developers then only need to write a generic implementation of an algorithm and it can work on any hardware for which someone has implemented a compatible array type. As Moore’s law comes to an end and variations on GPUs / FPGAs become required to get a speedup, allowing the user to specify the hardware interaction while still using a standard package will definitely be an advantage!
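To illustrate what “the developer writes generic code, the user picks the hardware via the array type” means, a small sketch (the TPUArray is hypothetical, and the commented CuArrays usage is only indicative):

```julia
# A package author writes only against the AbstractArray interface:
function gd_step!(w::AbstractArray, g::AbstractArray, η)
    @. w -= η * g   # broadcasting compiles specialized code for whatever array type w is
    return w
end

# A user on the CPU:
w, g = rand(1000), rand(1000)
gd_step!(w, g, 0.01)

# The same package code would run on a GPU array (e.g. from CuArrays/GPUArrays) or,
# hypothetically, a TPUArray, without the package ever mentioning that hardware:
# using CuArrays
# gd_step!(cu(w), cu(g), 0.01f0)
```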


PuMaS.jl: Pharmaceutical Modeling and Simulation Engine

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/pumas-jl-pharmaceutical-modeling-and-simulation-engine/

Here is an introduction to a pharmaceutical modeling project which I will be releasing in the near future. More details to come.


Solving Partial Differential Equations with Julia

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/solving-partial-differential-equations-with-julia/

Here is a talk from JuliaCon 2018 where I describe how to use the tooling across the Julia ecosystem to solve partial differential equations (PDEs), and how the different areas of the ecosystem are evolving to give top-notch PDE solver support.
