
Julia at NeurIPS and the Future of Machine Learning Tools

We are excited to share several research papers on the Julia and Flux machine learning ecosystem, to be presented at the NeurIPS Systems for ML Workshop. Since initially proposing the need for a first-class language and ecosystem for machine learning (ML), we have made considerable progress, including the ability to take gradients of arbitrary computations by leveraging Julia’s compiler, and compiling the resulting programs to specialized hardware such as Google’s Tensor Processing Units.

Here we talk about these papers and the projects that have brought these to life, namely: Flux.jl [paper], Zygote.jl [paper] and XLA.jl [paper].

Flux.jl is a library that offers a fresh take on machine learning: it exposes powerful tools to the user in a non-intrusive manner while remaining completely hackable, right down to its core.

“Careful design of the underlying automatic differentiation allows freely mixing mathematical expressions, built-in and custom layers and algorithms with control flow in one model. This makes Flux unusually easy to extend to new problems.”



Flux plays nicely with the entire Julia ecosystem, leveraging Julia’s multiple dispatch to make sharing types and data between Flux and many widely used array types transparent (e.g. CuArrays for effortless movement of models and data to the GPU). It even lets users extend Julia’s compiler and write custom GPU kernels within the same program.
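
As a minimal sketch of what this looks like in practice (assuming a CUDA-capable GPU and the CuArrays package, which enables Flux’s gpu helper), moving a model and its data to the GPU is just a change of array type:

using Flux, CuArrays

# Define a model on the CPU, then move its parameters to the GPU.
m = Chain(Dense(10, 5, relu), Dense(5, 2), softmax) |> gpu

# Data moves the same way; multiple dispatch selects the CUDA code paths.
x = gpu(rand(Float32, 10))

m(x)  # the same code now runs on the GPU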

In the Flux paper, we demonstrate how easily one can take advantage of the underlying ecosystem to express complex ideas. One example is training Flux models with custom training loops that can house arbitrary logic, including more complex gradient flows than a typical machine learning framework might support:

for (x, c, d) in training_set
    # the model produces two predictions per example
    c_hat, d_hat = model(x)
    # the c loss also penalises d_hat for matching d
    c_loss = loss(c_hat, c) + λ*loss(d_hat, 1 - d)
    d_loss = loss(d_hat, d)
    # accumulate gradients for both losses, then apply the optimiser
    back!(c_loss)
    back!(d_loss)
    opt()
end

Flux.jl has been shown to perform on par with contemporary deep learning libraries while being dramatically simpler, providing intelligent abstractions and maintaining a minimalist API.
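
To give a flavour of that API, here is a minimal sketch of defining and evaluating a small model. Chain, Dense, softmax, crossentropy and onehot are standard Flux building blocks, though the exact training API has varied across Flux versions:

using Flux

# A small multi-layer perceptron: two dense layers and a softmax output.
model = Chain(
    Dense(28^2, 32, relu),
    Dense(32, 10),
    softmax)

# Forward pass on a dummy input vector.
x = rand(Float32, 28^2)
ŷ = model(x)

# Cross-entropy loss against a one-hot target, reusing the model above.
loss(x, y) = Flux.crossentropy(model(x), y)
loss(x, Flux.onehot(3, 0:9))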

Calculating derivatives is a recurring and intensive task when training any large model, and compiler-level optimisations for differentiable code have seen a recent surge in interest. Automatic differentiation can be used almost transparently when it is hooked into the language’s compiler.

Zygote.jl is one such example: it performs source-to-source transformations on Julia’s Static Single Assignment (SSA) form, taking advantage of many of the recent improvements made to the base Julia compiler. Related efforts such as Capstan.jl apply the same compiler primitives to automatic differentiation in other settings.



Zygote transparently generates adjoint code for arbitrary Julia functions, sacrificing neither speed nor the dynamism of the full Julia language. It interacts directly with Julia’s existing compiler and utilizes its full set of optimisation heuristics. It exposes a familiar interface, making usage extremely simple, as shown by the following example:

julia> @code_llvm derivative(x -> 5x+3, 1)
define i64 @"julia_#625_38792"(i64) {
top:
    ret i64 5
}

It enables reverse mode AD while preserving existing language semantics. The Zygote paper also presents some benchmarks for simple functions against contemporary methods.
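
Because existing language semantics are preserved, gradients flow through ordinary Julia control flow such as loops and branches. The following is a minimal sketch using Zygote’s gradient function (the example function and values are purely illustrative):

using Zygote

# An ordinary Julia function with a loop and a branch.
function f(x)
    y = zero(x)
    for i in 1:3
        y += i * x^2        # y = 6x^2 after the loop
    end
    return x > 0 ? y : -y
end

# Zygote differentiates straight through the control flow.
Zygote.gradient(f, 2.0)     # (24.0,), since d/dx 6x^2 = 12x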

“It opens up the opportunity for robust traditional compiler techniques to be extended to machine learning, enabling kernel fusion or compilation for accelerators with no artificial limitations on the kinds of models that researchers can express.
This combination has not previously been possible in a high-level, general-purpose programming language.”

XLA.jl, released recently, shows how the Julia compiler can be repurposed to target Google’s TPUs.



This package brings the story full circle: it takes simple, elegant Flux models, applies Zygote’s AD, and offloads the entire forward and backward pass onto the TPU for maximum speed. The XLA paper details the methodology, using Google’s latest XRT API to compile Julia code to XLA IR. It explains how the forward and backward passes are generated, how control flow is handled, and how dynamic Julia code is compiled down to static sub-segments for execution on the TPU.

“Targeting TPUs using our compiler, we are able to evaluate the VGG19 forward pass on a batch of 100 images in 0.23s”

XLA.jl is written in under 1000 lines of code, a truly impressive feat considering the opportunities it opens up. It also shines a light on the language’s expressive power.

# An HLO operation that generates uniform
# random numbers of the specified shape
# and element type:
struct HloRng <: HloOp{:rng}
    Type
    Shape
end

"""A function that adds random numbers to
each entry of a 1000x1000 matrix"""
@eval function add_rand_1000x1000(
        A::XRTArray{Float32, (1000, 1000), 2})
    random = $(HloRng(Float32, (1000, 1000)))()
    result = $(HloAdd())(random, A)
    return result
end

Google Cloud TPUs provide an efficient, extremely high-performance computational platform that can dramatically speed up the demanding task of training models. From the BFloat16s.jl package, which allows algorithms to be prototyped on CPUs to check their numerical stability under the restricted precision available on TPUs, to the compiler itself and the surrounding ML ecosystem, Julia offers a dynamic, familiar and high-performance environment for taking advantage of this specialized hardware. The progress made within the past few months, and the recognition it has received, have us very excited about the future of machine learning in the Julia ecosystem and the world at large.
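
As a minimal sketch of that prototyping workflow (assuming the BFloat16 type exported by BFloat16s.jl), one can run a small computation in bfloat16 on the CPU and compare it against single precision to gauge the accuracy lost to the reduced precision:

using BFloat16s

# Truncate inputs to bfloat16, the 16-bit format used on TPUs.
A = BFloat16.(randn(Float32, 4, 4))
b = BFloat16.(randn(Float32, 4))

# Run the same computation in bfloat16 and in Float32.
x_bf  = A * b
x_f32 = Float32.(A) * Float32.(b)

# The worst-case discrepancy gives a feel for the precision loss.
maximum(abs.(Float32.(x_bf) .- x_f32))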

Creating Interactive Online Flux.jl Demos

Neethu Mariya Joy is pursuing a BE(Hons.) in Computer Engineering at Birla Institute of Technology and Science in Pilani.

Neethu explains how she became interested in Julia and how she took the initiative to participate in the Google Summer of Code program:

“My introduction to programming was at school. I remember coming early to school with friends to figure out how some code written in BASIC works by trying out all sorts of variations at our computer lab. Back then, it was just something that was fun to play with.

“I was excited about getting a chance to study Computer Engineering at BITS Pilani. I joined a few technical clubs which helped me accelerate my learning process. I heard about Julia from a member of one of those clubs. When I first decided to give it a try, I tried converting some old node.js code into Julia. The code ran faster, but even more important, the code was much shorter, easier to read and easier to debug.

“I heard about the Google Summer of Code program from my seniors at college. When I learned that Julia was participating through NumFOCUS, I reached out to Julia Computing’s Mike Innes about my interest in contributing. Mike told me about the FluxJS demos project and I was really excited to work on it because it seemed fun and creative. Also, I got a chance to learn about the various models that could be implemented.

“My project was to create interactive demos for Flux.jl on the web. The challenging part was to get all the details of the demos right, especially while making an AlphaGo demo. I added features to FluxJS.jl as well. Currently, fluxml.ai contains five demos which were developed during the summer.

“Converting Flux.jl models with custom layers into tensorflow.js code can be done with a few primitives, since Julia lets you create a graph of the function calls used while executing a function by examining its source. For example, different layers that use the same basic function calls, say add and multiply, can be converted to JavaScript using a set of core primitives instead of separate primitives for each layer.

“These demos can be used to teach new users some of the potential applications of Flux. Each demo is accompanied by a link to its code. Having an interactive demo that can run in any browser can help new users understand the underlying models well.

“Going forward, I will continue working on more demos and create better ways to learn Flux.”
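
For readers curious what the conversion Neethu describes looks like in practice, here is a minimal sketch; it assumes FluxJS.jl’s @code_js macro, which traces a model call and emits the equivalent tensorflow.js code:

using Flux, FluxJS

# A small Flux model built from standard layers.
m = Chain(Dense(10, 5, relu), Dense(5, 2), softmax)

# Trace the forward pass and print tensorflow.js code for it.
@code_js m(rand(10))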

Google Cloud TPUs Now Speak Julia

Did you know that Google Cloud TPUs now speak Julia?

Google AI lead Jeff Dean took note.

More information is available in Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs by Julia Computing co-founder and CTO Keno Fischer and Senior Research Engineer Elliot Saba.

In other Julia news:

Julia Computing is participating in upcoming Julia meetups including:

  • Cambridge, MA: Julia SPLASH Meetup with Viral Shah and Jameson Nash (Julia Computing), Cambridge Area Julia Users Network (CAJUN) and Association for Computing Machinery (ACM) Special Interest Group on Programming Languages (SIGPLAN) Systems, Programming, Languages and Applications: Software for Humanity (SPLASH) at MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) Nov 7
  • London: Julia and Big Data – Aviva, Spark, Hadoop, Hive with Malcolm Sherrington (AMIS Consulting), Avik Sengupta (Julia Computing) and London Julia User Group at Microsoft Reactor Nov 12
  • Budapest: Bemutatjuk a Juliát with Avik Sengupta (Julia Computing) and Budapest Julia User Group at Cursor Insight Nov 21

For a more complete up-to-date list of events featuring Julia, please visit https://juliacomputing.com/events/

Julia Computing announced a Call for Proposals for Diversity and Inclusion grants up to $4,000, generously funded by the Alfred P. Sloan Foundation. If you have an idea to increase diversity and inclusion in the Julia community, we want to hear from you.

The new JuliaPro has been completely re-architected with faster, easier downloads and fewer bugs and glitches. Click here to learn more.

JuliaBox also benefited from a major upgrade. Click here to learn how you can get more power, memory, speed, disk space, priority support, an autograder for classes and more for as little as $7 per month for academic users or $14 per month for non-academic users.

JuliaTeam is an enterprise solution that works seamlessly behind your organization’s firewall, resolves proxy server issues and facilitates package development, installation, management and control.

Julia training is available online and in person from Julia Computing, including Intro to Julia, Machine Learning and Artificial Intelligence, and customized courses at every level; live instructor-led online courses are also offered. Registration is available here.

Julia and Julia Computing in the News

  • Analytics India: Julia Users Can Now Rejoice, Google Cloud Has Powerful Capabilities to Support the Language
  • Analytics India: 6 Tech Giants Who Are Building Robust AI Developer Ecosystems in India
  • Synced: Google Cloud TPUs Now Speak Julia
  • arXiv: Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs
  • Packt: Julia for Machine Learning. Will the New Language Pick Up Pace?
  • Nature: Why Jupyter Is Data Scientists’ Computational Notebook of Choice
  • Economic Times: All About ‘Old’ IT Skills: Why Indian Techies Don’t Need to Panic Just Yet
  • Datanami: Jane Herriman, Director of Diversity and Outreach for Julia Computing, Joins NumFOCUS Board of Directors
  • MarkTechPost: Programming Languages for Machine Learning: Julia
  • Times of India: India Runs on Red Hat, the Company IBM Is Buying for $34 Billion
  • TechRepublic: The Fast-Growing Programming Language “Julia”: An MIT Professor on the Reasons for Its Popularity and What Comes Next
  • Technotification: 10 Easy Programming Languages To Learn For Beginners
  • ZDNet: Apache Spark Creators Set Out to Standardize Distributed Machine Learning Training, Execution, and Deployment
  • HPCWire: The Linux Foundation Awards 31 Open Source Training Scholarships
  • Tactical Business: Numerical Analysis Software Used Extensively in Fields such as Computing and Engineering Pushes Growth
  • Irish Tech News: The Linux Foundation Announces 31 Open Source Training Scholarship Awardees
  • TechSina: He Created Julia, a Language That Combines the Strengths of Many Programming Languages
  • Root: Why Was Incanter Created, and Why Might It Interest Us?
  • Forbes: Ready For A Career Makeover? These Skills Will Land You A New Collar Job
  • MarkTechPost: Do You Want to Become a Machine Learning Engineer?


Julia Meetup Groups: There are 37 Julia Meetup groups worldwide with 8,584 members. If there’s a Julia Meetup group in your area, we hope you will consider joining, participating and helping to organize events. If there isn’t, we hope you will consider starting one.

Julia Jobs, Fellowships and Internships

Do you work at or know of an organization looking to hire Julia programmers as staff, research fellows or interns? Would your employer be interested in hiring interns to work on open source packages that are useful to their business? Help us connect members of our community to great opportunities by sending us an email, and we’ll get the word out.

There are more than 170 Julia jobs currently listed on Indeed.com, including jobs at Accenture, BlackRock, Boeing, Conning, Dow Jones, CBRE, McKinsey, Huawei, Instacart, IBM, NBCUniversal, comScore, Disney, Gallup, Mathematica, Zillow, Facebook, National Renewable Energy Research Laboratory, Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Sandia National Laboratory, Columbia University, Rutgers University, University of Illinois – Chicago, University of South Florida, Washington State University and Brown University.

Contact Us: Please contact us if you wish to:

  • Purchase or obtain license information for Julia products such as
    JuliaTeam, JuliaPro, JuliaRun, JuliaFin or JuliaBox
  • Obtain pricing for Julia consulting projects for your organization
  • Schedule Julia training for your organization
  • Share information about exciting new Julia case studies or use cases
  • Spread the word about an upcoming conference, workshop, training,
    hackathon, meetup, talk or presentation involving Julia
  • Partner with Julia Computing to organize a Julia meetup, conference,
    workshop, training, hackathon, talk or presentation involving Julia
  • Submit a Julia internship, fellowship or job posting

About Julia and Julia Computing

Julia is the fastest high performance open source computing language for data, analytics, algorithmic trading, machine learning, artificial intelligence, and other scientific and numeric computing applications. Julia solves the two language problem by combining the ease of use of Python and R with the speed of C++. Julia provides parallel computing capabilities out of the box and unlimited scalability with minimal effort. For example, Julia has run at petascale on 650,000 cores with 1.3 million threads to analyze over 56 terabytes of data using Cori, one of the ten largest and most powerful supercomputers in the world. With more than 2 million downloads, 1,900 registered packages, 41 thousand GitHub stars and +101% annual download growth, Julia is one of the top programming languages developed on GitHub. Julia adoption is growing rapidly in finance, insurance, machine learning, energy, robotics, genomics, aerospace, medicine and many other fields.

Julia Computing was founded in 2015 by all the creators of Julia to develop products and provide professional services to businesses and researchers using Julia.

To learn more, please visit the Case Studies section on the Julia Computing Website.

Julia users, partners and employers hiring Julia programmers in 2018 include Amazon, Apple, BlackRock, Booz Allen Hamilton, Capital One, Comcast, Disney, Ernst & Young, Facebook, Ford, Google, IBM, Intel, KPMG, Microsoft, NASA, Netflix, Oracle, PwC, Uber, and many more.