Julia Computing Raises $4.6M in Seed Funding

By: Julia Computing, Inc.

Re-posted from: http://juliacomputing.com/press/2017/06/19/funding.html

Berkeley, California – Julia Computing is pleased to announce seed funding of $4.6M from investors General Catalyst and Founder Collective.

Julia Computing CEO Viral Shah says, “We selected General Catalyst and Founder Collective as our initial investors because of their success backing entrepreneurs with business models based on open source software. This investment helps us accelerate product development and continue delivering outstanding support to our customers, while the entire Julia community benefits from Julia Computing’s contributions to the Julia open source programming language.”

The General Catalyst team was led by Donald Fischer, who was an early product manager for Red Hat Enterprise Linux, and the Founder Collective team was led by David Frankel.

Julia is the fastest modern high performance open source computing language for data, analytics, algorithmic trading, machine learning and artificial intelligence. Julia combines the functionality and ease of use of Python, R, Matlab, SAS and Stata with the speed of C++ and Java. Julia delivers dramatic improvements in simplicity, speed, capacity and productivity. Julia provides parallel computing capabilities out of the box and unlimited scalability with minimal effort. With more than 1 million downloads and 161% annual growth, Julia is one of the top 10 programming languages developed on GitHub, and adoption is growing rapidly in finance, insurance, energy, robotics, genomics, aerospace and many other fields.

According to Tim Thornham, Director of Financial Solutions Modeling at Aviva, Britain’s second-largest insurer, “Solvency II compliant models in Julia are 1,000x faster than Algorithmics, use 93% fewer lines of code and took one-tenth the time to implement.”

Julia users, partners and employers hiring Julia programmers in 2017 include Amazon, Apple, BlackRock, Capital One, Comcast, Disney, Facebook, Ford, Google, Grindr, IBM, Intel, KPMG, Microsoft, NASA, Oracle, PwC, Raytheon and Uber.

  1. Julia is lightning fast. Julia provides speed improvements up to
    1,000x for insurance model estimation, 225x for parallel
    supercomputing image analysis and 11x for macroeconomic modeling.

  2. Julia is easy to learn. Julia’s flexible syntax is familiar and
    comfortable for users of Python, R and Matlab.

  3. Julia integrates well with existing code and platforms. Users of
    Python, R, Matlab and other languages can easily integrate their
    existing code into Julia.

  4. Elegant code. Julia was built from the ground up for
    mathematical, scientific and statistical computing, and has advanced
    libraries that make coding simple and fast, and dramatically reduce
    the number of lines of code required – in some cases, by 90%
    or more.

  5. Julia solves the two language problem. Because Julia combines
    the ease of use and familiar syntax of Python, R and Matlab with the
    speed of C, C++ or Java, programmers no longer need to estimate
    models in one language and reproduce them in a faster
    production language. This saves time and reduces error and cost.
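As a toy illustration of point 5 (this example is not from the press release; the function name and workload are hypothetical), the same high-level code a researcher would prototype can run as-is at compiled speed, with no rewrite into C or C++:

```julia
# A Monte Carlo estimate of pi written in plain, high-level Julia.
# Julia's JIT compiles this loop to native code, so the "prototype"
# is also the production implementation.
function mc_pi(n)
    hits = 0
    for _ in 1:n
        x, y = rand(), rand()
        hits += (x^2 + y^2 <= 1)   # count points inside the unit quarter circle
    end
    return 4 * hits / n
end

mc_pi(10^6)  # ≈ 3.14
```

The loop-heavy style that would be slow in interpreted languages is exactly what Julia compiles efficiently, which is the substance of the "two language problem" claim.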

Julia Computing was founded in 2015 by the creators of the open source Julia language to develop products and provide support for businesses and researchers who use Julia. Julia Computing’s founders are Viral Shah, Alan Edelman, Jeff Bezanson, Stefan Karpinski, Keno Fischer and Deepak Vinchhi.

A few examples of how Julia is being used today include:

BlackRock, the world’s largest asset manager, is using Julia to power their trademark Aladdin analytics platform.

Aviva, Britain’s second-largest insurer, is using Julia to make Solvency II compliance models run 1,000x faster using just 7% as much code as the legacy program it replaced.

“Solvency II compliant models in Julia are 1,000x faster than Algorithmics, use 93% fewer lines of code and took one-tenth the time to implement.” Tim Thornham, Director of Financial Solutions Modeling

Berkery Noyes is using Julia for mergers and acquisitions analysis.

“Julia is 20 times faster than Python, 100 times faster than R, 93 times faster than Matlab and 1.3 times faster than Fortran. What really excites us is that it’s interesting that you can write high-level, scientific and numerical computing but without having to re-translate that. Usually, if you have something in R or Matlab and you want to make it go faster, you have to re-translate it to C++, or some other faster language; with Julia, you don’t—it sits right on top.” Keith Lubell, CTO

UC Berkeley Autonomous Race Car (BARC) is using Julia for self-driving vehicle navigation.

“Julia has some amazing new features for our research. The port to ARM has made it easy for us to translate our research codes into real world applications.” Francesco Borrelli, Professor of Mechanical Engineering and co-director of the Hyundai Center of Excellence in Integrated Vehicle Safety Systems and Control at UC Berkeley

Federal Aviation Administration (FAA) and MIT Lincoln Labs are using Julia for the Next-Generation Aircraft Collision Avoidance System.

“The previous way of doing things was very costly. Julia is very easy to understand. It’s a very familiar syntax, which helps the reader understand the document with clarity, and it helps the writer develop algorithms that are concise. Julia resolves many of our conflicts, reduces cost during technology transfer, and because Julia is fast, it enables us to run the entire system and allows the specification to be executed directly. We continue to push Julia as a standard for specifications in the avionics industry. Julia is the right answer for us and exceeds all our needs.” Robert Moss, MIT Lincoln Labs

Augmedics is using Julia to give surgeons ‘x-ray vision’ via augmented reality.

“I stumbled upon Julia and gave it a try for a few days. I fell in love with the syntax, which is in so many ways exactly how I wanted it to be. The Julia community is helpful, Juno (the interactive development environment for Julia) is super-helpful. I don’t know how one can write without it. As a result, we are achieving much more and far more efficiently using Julia.” Tsur Herman, Senior Algorithms Developer

Path BioAnalytics is using Julia for personalized medicine.

“We were initially attracted to Julia because of the innovation we saw going on in the community. The computational efficiency and interactivity of the data visualization packages were exactly what we needed in order to process our data quickly and present results in a compelling fashion. Julia is instrumental to the efficient execution of multiple workflows, and with the dynamic development of the language, we expect Julia will continue to be a key part of our business going forward.” Katerina Kucera, Lead Scientist

Voxel8 is using Julia for 3D printing and drone manufacture.

“The expressiveness of a language matters. Being high level and having an ability to iterate quickly makes a major difference in a fast-paced innovative environment like at Voxel8. The speed at which we’ve been able to develop this has been incredible. If we were doing this in a more traditional language like C or C++, we wouldn’t be nearly as far as we are today with the number of developers we have, and we wouldn’t be able to respond nearly as quickly to customer feedback regarding what features they want. There is a large number of packages for Julia that we find useful. Julia is very stable – the core language is stable and fast and most packages are very stable.” Jack Minardi, Co-Founder and Software Lead

Federal Reserve Bank of New York and Nobel Laureate Thomas J. Sargent are using Julia to solve macroeconomic models 10x faster.

“We tested our code and found that the model estimation is about ten times faster with Julia than before, a very large improvement. Our ports (computer lingo for “translations”) of certain algorithms, such as Chris Sims’s gensys (which computes the model solution), also ran about six times faster in Julia than the … versions we had previously used.” Marco Del Negro, Marc Giannoni, Pearl Li, Erica Moszkowski and Micah Smith, Federal Reserve Bank of New York

“Julia is a great tool. We like Julia. We are very excited about Julia because our models are complicated. It’s easy to write the problem down, but it’s hard to solve it – especially if our model is high dimensional. That’s why we need Julia. Figuring out how to solve these problems requires some creativity. This is a walking advertisement for Julia.” Thomas J. Sargent, Nobel Laureate

Intel, Lawrence Berkeley National Laboratory, UC Berkeley and the National Energy Research Scientific Computing Center are using Julia for parallel supercomputing to increase the speed of astronomical image analysis 225x.

Barts Cancer Institute, Institute of Cancer Research, University College London and Queen Mary University of London are using Julia to model cancer genomes.

“Coming from using Matlab, Julia was pretty easy, and I was surprised by how easy it was to write pretty fast code. Obviously the speed, conciseness and dynamic nature of Julia is a big plus and the initial draw, but there are other perhaps unexpected benefits. For example, I’ve learned a lot about programming through using Julia. Learning Julia has helped me reason about how to write better and faster code. I think this is primarily because Julia is very upfront about why it can be fast and nothing is hidden away or “under the hood”. Also as most of the base language and packages are written in Julia, it’s great to be able to delve into what’s going on without running into a wall of C code, as might be the case in other languages. I think this is a big plus for its use in scientific research too, where we hope that our methods and conclusions are reproducible. Having a language that’s both fast enough to implement potentially sophisticated algorithms at a big scale but also be readable by most people is a great resource. Also, I find the code to be very clean looking, which multiple dispatch helps with a lot, and I like the ability to write in a functional style.” Marc Williams, Barts Cancer Institute, Queen Mary University of London and University College London

Reading DataFrames with non-UTF8 encoding in Julia

By: perfectionatic

Re-posted from: http://perfectionatic.org/?p=414

Recently I ran into a problem when trying to read a CSV file from a Scandinavian friend into a DataFrame. I was getting errors because the reader could not properly parse the Latin-1 encoded names.

I tried running

using DataFrames
dataT=readtable("example.csv", encoding=:latin1)

but got this error

ArgumentError: Argument 'encoding' only supports ':utf8' currently.

The solution makes use of [StringEncodings.jl](https://github.com/nalimilan/StringEncodings.jl) to wrap the file's data stream before presenting it to the readtable function:

using StringEncodings
f=open("example.csv")
s=StringDecoder(f,"LATIN1", "UTF-8")
dataT=readtable(s)

The StringDecoder generates an IO stream that appears to be UTF-8 to the readtable function.
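An alternative to wrapping the stream (a sketch using only documented StringEncodings.jl calls; the demo file name and its contents are made up for illustration) is to decode the file's bytes into a UTF-8 String in one shot:

```julia
using StringEncodings

# Write a tiny Latin-1 encoded file for the demo (0xE5 is 'å' in Latin-1).
write("demo_latin1.csv", UInt8[0x6e, 0x61, 0x6d, 0x65, 0x0a, 0x41, 0xe5, 0x0a])  # "name\nAå\n"

# decode() transcodes a byte vector from the given encoding to a UTF-8 String.
text = decode(read("demo_latin1.csv"), "LATIN1")
println(text)  # prints "name" then "Aå"
```

The resulting String can then be handed to any UTF-8-only parser, for example by wrapping it in an IOBuffer.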

Tupper’s self-referential formula in Julia

By: perfectionatic

Re-posted from: http://perfectionatic.org/?p=399

I was surprised when I came across Tupper's self-referential formula on Twitter, and I felt compelled to implement it in Julia.

The formula is expressed as

{1\over 2} < \left\lfloor \mathrm{mod}\left(\left\lfloor {y \over 17} \right\rfloor 2^{-17 \lfloor x \rfloor - \mathrm{mod}(\lfloor y\rfloor, 17)},2\right)\right\rfloor

and yields a bitmap facsimile of itself.
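The inequality can be written down almost verbatim as a Julia predicate on a single point (a minimal sketch; `tupper` is my own helper name, and the precision value is a generous guess for the roughly 1800-bit y values used below):

```julia
setprecision(BigFloat, 4000)  # the default 256 bits is far too little for huge y values

# Tupper's predicate: true when the pixel at (x, y) is set.
# For integer y, floor(y) == y, so mod(y, 17) matches the formula's mod(⌊y⌋, 17).
tupper(x, y) = 0.5 < floor(mod(floor(BigFloat(y) / 17) * big(2.0)^(-17x - mod(y, 17)), 2))

tupper(0, 17)  # true: floor(17/17) = 1 and the exponent is 0, so the tested bit is 1
```

The full plot below just evaluates this predicate over a 106×17 grid of points starting at the magic value of y.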

k=big"960939379918958884971672962127852754715004339660129306651505519271702802395266424689642842174350718121267153782770623355993237280874144307891325963941337723487857735749823926629715517173716995165232890538221612403238855866184013235585136048828693337902491454229288667081096184496091705183454067827731551705405381627380967602565625016981482083418783163849115590225610003652351370343874461848378737238198224849863465033159410054974700593138339226497249461751545728366702369745461014655997933798537483143786841806593422227898388722980000748404719"
In the above, the big integer is the magic number that lets us generate the image of the formula. I also needed to setprecision of BigFloat to be very high, as rounding errors at the default precision do not give the desired result. The implementation was inspired by the one in Python, but I find the Julia version a great deal more concise and clearer.

setprecision(BigFloat, 4000)
function tupper_field(k)
    f = falses(17, 106)
    for (ix,x) in enumerate(0.0:1:105.0), (iy,y) in enumerate(k:k+16)
        f[iy,ix] = 0.5 < floor(mod(floor(BigFloat(y)/17)*big(2.0)^(-17x - mod(y,17)), 2))
    end
    f
end
f = tupper_field(k)
using Images
img = colorview(Gray, .!f)

I just inverted the boolean array here to get the desired bitmap output.