Tag Archives: Open Source

Julia and MATLAB can coexist. Let us show you how.

By: Great Lakes Consulting

Re-posted from: https://blog.glcs.io/juliacon-2025-preview

This post was written by Steven Whitaker.

Have you ever wished you could start using the Julia programming language to develop custom models? Does the idea of replacing outdated MATLAB code and models seem overwhelming?

Or maybe you don’t plan to replace all MATLAB code, but wouldn’t it be exciting to integrate Julia code into existing workflows?

Also, technicalities aside, how do you convince your colleagues to make the leap into the Julia ecosystem?

I’m excited to share an announcement! At this year’s JuliaCon, I will be speaking about a small but significant step you can take to start adding Julia to your MATLAB codebase.

Great news! You can transition to Julia smoothly without completely abandoning MATLAB. There’s a straightforward method to embrace the best of both worlds, so you won’t need to rewrite your legacy models from scratch.

I’ll give my full talk in July, but if you don’t want to wait, keep reading for a sneak peek!

Background

The GLCS.io team has been developing Julia-based solutions since 2015. Over the past 4 years, we’ve had the pleasure of redesigning and enhancing Julia models for our clients in the finance, science, and engineering sectors. Julia’s incredible speed and versatility have transformed how we tackle complex computations. However, we also fully acknowledge the reality: MATLAB continues to hold a significant place in countless companies and research labs worldwide.

For decades, MATLAB has been the benchmark for data analysis, modeling, and simulation across scientific and engineering fields. There are likely hundreds of thousands of MATLAB licenses in use, with millions of users supporting an unimaginable number of models and codebases.

Even for a single company, fully transitioning to Julia often feels insurmountable. The vast amount of existing MATLAB code presents a significant challenge for any team considering adopting Julia.

Yet, unlocking Julia’s power is vital for companies aiming to excel in today’s competitive landscape. The question isn’t if companies should adopt Julia—it’s how to do it.

Companies should blend Julia with their MATLAB environments, ensuring minimal disruption and optimal resource use. This strategic integration delivers meaningful gains in accuracy, performance, and scalability to transform operations and drive success.

JuliaCon Preview

At JuliaCon, I’m excited to share how you can seamlessly integrate Julia into existing MATLAB workflows—a process that has delivered up to 100x performance improvements while enhancing code quality and functionality. Through a real-world model, I’ll highlight design patterns, benchmark comparisons, and valuable business case insights to demonstrate the transformative potential of integrating Julia.

(Spoiler alert: the performance improvement is more than 100x for the example I will show at JuliaCon.)

What We Offer

Unlock high-performance modeling! Our dedicated team is here to integrate Julia into your MATLAB workflows. Experience a strategic, step-by-step process tailored for seamless Julia-MATLAB integration, focused on efficiency and delivering measurable results:

  1. Tailored Assessment: Pinpoint challenges and opportunities for Julia to address.
  2. MATLAB Benchmarking: Establish a performance baseline to measure progress and impact.
  3. Julia Model Development: Convert MATLAB models to Julia or assist your team in doing so.
  4. Julia Integration: Combine Julia’s capabilities with your existing MATLAB workflows for optimal results.
  5. Roadmap Alignment: Validate performance improvements, create a strong business case for leadership, and agree on future support and innovation.

Check out our website for more details.

Summary

By attending my JuliaCon talk, you will learn how to seamlessly integrate Julia into your existing MATLAB codebase. And by leveraging our support at GLCS, you can adopt Julia without disruption—unlocking faster computations, improved models, and better scalability while retaining the strengths of your MATLAB codebase.

Are you or someone you know excited about harnessing the power of Julia and MATLAB together? Let’s connect! Schedule a consultation today to discover incredible performance gains of 100x or more.


MATLAB is a registered trademark of The MathWorks, Inc.

Cover image: The JuliaCon 2025 logo was obtained from https://juliacon.org/2025/.


Going from 98% to 99.9% in AI is where all the work is

By: Logan Kilpatrick

Re-posted from: https://medium.com/around-the-prompt/going-from-98-to-99-9-in-ai-is-where-all-the-work-is-ff7f1adff6e4?source=rss-2c8aac9051d3------2

How to build in the age of AI: advice from Chamath Palihapitiya

Image created by Author and Imagen 3

There are lots of phenomena happening in AI right now. On one hand, going from idea to code to working app has never been easier; AI has proven it can dramatically accelerate the creation of very good demos and MVPs. But where is the value created in the world? I would posit that much of it comes down to actually making things work in production. This is truer now than ever, as the barrier to entry in AI continues to go down.

Tools like https://bolt.new, https://lovable.dev/, https://v0.dev and others are enabling this new wave of accelerated software creation. For the long tail of builders, these tools work very well, but one of the main limitations is how to capture the “cartilage” that makes lots of companies actually work. I had a conversation with Chamath Palihapitiya about this, and he did a great job of capturing where things stand.

So how do we get the last 2% and make some of these more difficult problems work? This is the $1,000,000 question. Right now, it still takes a lot of human work to translate super complex legacy processes into something powered by AI. My inclination is that agents might be helpful here, but as Chamath mentioned, it’s likely this is going to be a “10 year process”.

One of the things I like to think about is the Bitter Lesson, which, for folks who have not heard of it, can be summarized as: general-purpose approaches usually win out over specialized approaches in technology. In the context of getting this last 2% of reliability, you might imagine that what you go do is build a bunch of scaffolding, 100 different vertical agents, or even completely re-engineer some human system to work well in the age of AI. A lot of this depends on your timelines, but if you believe that model capabilities will keep scaling and generalizing to solve new problems, it is worth considering how much of an investment you should make into any one of those today, versus just waiting for the models to get good enough to solve the problem out of the box for you. The caveat is that the level of agency you should take, versus waiting for the innovation to come to you, is likely a factor of how much this change is going to disrupt you. If the chance is high, then you should pay the cost of building the scaffolding, doing the process re-engineering, etc., in order to mitigate the risk of large-scale change.

While all of that is true, I was reminded by Sully this morning of just how beautiful it is that the barrier to creating software has come down 10x in the last 2 years, while what you can build has increased by 10x. The only thing stopping you is having an idea and the desire to solve the problem.

So yeah, solving problems in large legacy systems (regulated industries, large companies, etc.) is not easy, but if you just want to build 0 to 1, there has never been a better time in human history to do so. Go build something people want, bet on the models progressing, and make the world better along the axis you care about.



Best Practices for Testing Your Julia Packages

By: Great Lakes Consulting

Re-posted from: https://blog.glcs.io/package-testing

This post was written by Steven Whitaker.

The Julia programming language is a high-level language that is known, at least in part, for its excellent package manager and outstanding composability. (See another blog post that illustrates this composability.)

Julia makes it super easy for anybody to create their own package. Julia’s package manager enables easy development and testing of packages. The ease of package development encourages developers to split reusable chunks of code into individual packages, further enhancing Julia’s composability.

In our previous post, we discussed how to create and register your own package. However, to encourage people to actually use your package, it helps to have an assurance that the package works. This is why testing is important. (Plus, you also want to know your package works, right?)

In this post, we will learn about some of the tools Julia provides for testing packages. We will also learn how to use GitHub Actions to run package tests against commits and/or pull requests to check whether code changes break package functionality.

This post assumes you are comfortable navigating the Julia REPL. If you need a refresher, check out our post on the Julia REPL.

Example Package

We will use a custom package called Averages.jl to illustrate how to implement testing in Julia. The package exports a single function, compute_average, which computes the mean of one collection or the elementwise average of several same-length collections.

The Project.toml looks like:

name = "Averages"uuid = "1fc6e63b-fe0f-463a-8652-42f2a29b8cc6"version = "0.1.0"[deps]Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"[extras]Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"[targets]test = ["Test"]

Note that this Project.toml has two more sections besides [deps]:

  • [extras] is used to indicate additional packages that are not direct dependencies of the package. In this example, Test is not used in Averages.jl itself; Test is used only when running tests.
  • [targets] is used to specify which packages are used where. In this example, test = ["Test"] indicates that the Test package should be used when testing Averages.jl.

The actual package code in src/Averages.jl looks like:

module Averages

using Statistics

export compute_average

compute_average(x) = (check_real(x); mean(x))

function compute_average(a, b...)
    check_real(a)
    N = length(a)
    for (i, x) in enumerate(b)
        check_real(x)
        check_length(i + 1, x, N)
    end
    T = float(promote_type(eltype(a), eltype.(b)...))
    average = Vector{T}(undef, N)
    average .= a
    for x in b
        average .+= x
    end
    average ./= length(b) + 1
    return a isa Real ? average[1] : average
end

function check_real(x)
    T = eltype(x)
    T <: Real || throw(ArgumentError("only real numbers are supported; unsupported type $T"))
end

function check_length(i, x, expected)
    N = length(x)
    N == expected || throw(DimensionMismatch("the length of input $i does not match the length of the first input: $N != $expected"))
end

end
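Before adding tests, it helps to see what the exported function does. Based on the code above, a quick REPL session should look like this:

julia> using Averages

julia> compute_average([1, 2, 3])  # mean of one collection
2.0

julia> compute_average([1, 2, 3], [4.0, 5.0, 6.0])  # elementwise average
3-element Vector{Float64}:
 2.5
 3.5
 4.5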

Adding Tests

Tests for a package live in test/runtests.jl. (The file name is important!) Inside this file there are two main testing utilities that are used: @testset and @test. Additionally, @test_throws can also be useful for testing. The Test standard library package provides all of these macros.

  • @testset is used to organize tests into cohesive blocks.
  • @test is used to actually test package functionality.
  • @test_throws is used to ensure the package throws the errors it should.

Here is how test/runtests.jl might look for Averages.jl:

using Averages
using Test

@testset "Averages.jl" begin
    a = [1, 2, 3]
    b = [4.0, 5.0, 6.0]
    c = (BigInt(7), 8f0, Int32(9))
    d = 10
    e = 11.0
    bad = ["hi", "hello", "hey"]

    @testset "`compute_average(x)`" begin
        @test compute_average(a) == 2
        @test compute_average(a) isa Float64
        @test compute_average(c) == 8
        @test compute_average(c) isa BigFloat
        @test compute_average(d) == 10
    end

    @testset "`compute_average(a, b...)`" begin
        @test compute_average(a, a) == a
        @test compute_average(a, b) == [2.5, 3.5, 4.5]
        @test compute_average(a, b, c) == b
        @test compute_average(a, b, c) isa Vector{Float64}
        @test compute_average(b, b, b) == b
        @test compute_average(d, e) == 10.5
    end

    @testset "Error Handling" begin
        @test_throws ArgumentError compute_average(im)
        @test_throws ArgumentError compute_average(a, bad)
        @test_throws ArgumentError compute_average(bad, c)
        @test_throws DimensionMismatch compute_average(a, b[1:2])
        @test_throws DimensionMismatch compute_average(a[1:2], b)
    end
end

Now let’s look more closely at the macros used:

  • @testset can be given a label to help organize the reporting Julia does at the end of testing. Besides that, @testset wraps around a set of tests (including other @testsets).
  • @test is given an expression that evaluates to a boolean. If the boolean is true, the test passes; otherwise it fails.
  • @test_throws takes two inputs: an error type and then an expression. The test passes if the expression throws an error of the given type.
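As a minimal, self-contained illustration (independent of Averages.jl), the following snippet can be pasted into the REPL to see all three macros pass:

using Test

@testset "toy examples" begin
    @test 1 + 1 == 2                    # true, so the test passes
    @test isodd(3)                      # any boolean-valued expression works
    @test_throws DivideError div(1, 0)  # passes because integer division by zero throws
end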

Testing Against Other Packages

In some cases, you might want to ensure your package is compatible with a type defined in another package. For our example, let’s test against StaticArrays.jl. Our package does not depend on StaticArrays.jl, so we need to add it as a test-only dependency by editing the [extras] and [targets] sections in the Project.toml:

[extras]
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["StaticArrays", "Test"]

(Note that I grabbed the UUID for StaticArrays.jl from its Project.toml on GitHub.)
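If you’d rather not dig through GitHub, here is one alternative sketch, assuming the package is in a registry you have installed: add it to a throwaway environment and read the name/UUID pair that Pkg records.

import Pkg

Pkg.activate(; temp = true)   # throwaway environment
Pkg.add("StaticArrays")       # Pkg writes `StaticArrays = "<uuid>"` under [deps]
print(read(Pkg.project().path, String))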

Then we can add some tests to make sure compute_average is generic enough to work with StaticArrays:

using Averages
using Test
using StaticArrays

@testset "Averages.jl" begin
    # ... (variable definitions and test sets from before) ...

    @testset "StaticArrays.jl" begin
        s = SA[12, 13, 14]
        @test compute_average(s) == 13
        @test compute_average(s, s) == [12, 13, 14]
        @test compute_average(a, b, s) == [17/3, 20/3, 23/3]
        @test compute_average(s, a, c) == [20/3, 23/3, 26/3]
    end
end

Running Tests Locally

Now Averages.jl is ready for testing. To run package tests on your own computer, start Julia, activate the package environment, and then run test from the package prompt:

(@v1.X) pkg> activate /path/to/Averages

(Averages) pkg> test
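Alternatively, the same can be done programmatically (e.g., from a script) using the standard Pkg API:

import Pkg

Pkg.activate("/path/to/Averages")  # activate the package environment
Pkg.test()                         # run the active package's test suite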

The first thing test does is set up a temporary package environment for testing that includes the packages defined in the test target in the Project.toml. Then it runs the tests and displays the result:

     Testing Running tests...
Test Summary: | Pass  Total  Time
Averages.jl   |   20     20  0.7s
     Testing Averages tests passed

If a test fails, the result looks like this:

     Testing Running tests...
`compute_average(a, b...)`: Test Failed at /path/to/Averages/test/runtests.jl:27
  Expression: compute_average(a, b) == [2.0, 3.5, 4.5]
   Evaluated: [2.5, 3.5, 4.5] == [2.0, 3.5, 4.5]
Stacktrace:
 [1] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:672 [inlined]
 [2] macro expansion
   @ /path/to/Averages/test/runtests.jl:27 [inlined]
 [3] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:1577 [inlined]
 [4] macro expansion
   @ /path/to/Averages/test/runtests.jl:26 [inlined]
 [5] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:1577 [inlined]
 [6] top-level scope
   @ /path/to/Averages/test/runtests.jl:7
Test Summary:                | Pass  Fail  Total  Time
Averages.jl                  |   19     1     20  0.9s
  `compute_average(x)`       |    5            5  0.1s
  `compute_average(a, b...)` |    5     1      6  0.6s
  Error Handling             |    5            5  0.0s
  StaticArrays.jl            |    4            4  0.2s
ERROR: LoadError: Some tests did not pass: 19 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /path/to/Averages/test/runtests.jl:5
ERROR: Package Averages errored during testing

Some things to note:

  • When all tests in a test set pass, the test summary does not report the individual results of nested test sets. When a test fails, results of nested test sets are reported individually to show more precisely where the failure occurred.
  • When a test fails, the file and line number of the failing test are reported, along with the expression that failed. This information is displayed for all failures that occur.
  • The test summary reports how many tests passed and how many failed in each test set, in addition to how long each test set took.
  • Tests in a test set continue to run after a test fails. To have a test set stop on failure, use the failfast option:
    @testset failfast = true "Averages.jl" begin
    (This option is available only in Julia 1.9 and later.)

Now, when developing Averages.jl, we can run the tests locally to ensure we don’t break any functionality!

Running Tests with GitHub Actions

Besides running tests locally, one can use GitHub Actions to run tests on one of GitHub’s servers. One advantage is that it enables automated testing on various machines/operating systems and across various Julia versions. Automating tests in this way is an essential part of continuous integration (CI) (so much so that the phrase “running CI” is equivalent to “running tests via GitHub Actions”, even though CI technically involves more than just testing).

To enable testing via GitHub Actions, we just need to add an appropriate .yml file in the .github/workflows directory of our package. As mentioned in our previous post, PkgTemplates.jl can automatically generate the necessary .yml file. This is the default CI workflow generated by PkgTemplates.jl:

name: CI
on:
  push:
    branches:
      - main
    tags: ['*']
  pull_request:
  workflow_dispatch:
concurrency:
  # Skip intermediate builds: always.
  # Cancel intermediate builds: only if it is a pull request build.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ startsWith(github.ref, 'refs/pull/') }}
jobs:
  test:
    name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ github.event_name }}
    runs-on: ${{ matrix.os }}
    timeout-minutes: 60
    permissions: # needed to allow julia-actions/cache to proactively delete old caches that it has created
      actions: write
      contents: read
    strategy:
      fail-fast: false
      matrix:
        version:
          - '1.10'
          - '1.6'
          - 'pre'
        os:
          - ubuntu-latest
        arch:
          - x64
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: ${{ matrix.version }}
          arch: ${{ matrix.arch }}
      - uses: julia-actions/cache@v2
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1

For most users, the most relevant fields to customize are version and os (under jobs: test: strategy: matrix). Under os, specify the operating systems to run tests on (e.g., ubuntu-latest, windows-latest, macOS-latest). Under version, specify the versions of Julia to use when testing:

  • '1.X' means run on Julia 1.X.Y, where Y is the largest patch of Julia 1.X that has been released. For example, '1.9' means run on Julia 1.9.4.
  • '1' means run on the latest stable version of Julia.
  • 'pre' means run on the latest pre-release version of Julia.
  • 'lts' means run on Julia’s long-term support (LTS) version.

Usually, it makes sense just to test '1' and 'pre' to ensure compatibility with the current and upcoming Julia versions, as shown in the sketch below.
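For example, trimming the version list in the default CI.yml above to just those two entries would look like this (only the matrix section is shown):

    strategy:
      fail-fast: false
      matrix:
        version:
          - '1'    # latest stable release
          - 'pre'  # latest pre-release
        os:
          - ubuntu-latest
        arch:
          - x64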

One can also fine-tune the version and os fields, as well as other fields, when generating a package with PkgTemplates.jl. For example, to generate the .yml file to run tests only on Windows with Julia 1.8 and the latest pre-release version of Julia:

using PkgTemplates

gha = GitHubActions(; linux = false, windows = true, extra_versions = ["1.8", "pre"])
t = Template(; dir = ".", plugins = [gha])
t("MyPackage")

Note that the .yml file generated will also include testing on Julia 1.6. The Template constructor has a keyword argument julia that sets the minimum version of Julia you want your package to support, and this version is included in testing. As of this writing, by default the minimum version is Julia 1.6.
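For example, here is a sketch (assuming, hypothetically, that you want Julia 1.10 to be the minimum supported version) that passes the julia keyword so the generated CI.yml tests 1.10 instead of 1.6:

using PkgTemplates

# v"1.10" is a hypothetical choice; substitute your actual minimum version.
gha = GitHubActions(; linux = false, windows = true, extra_versions = ["pre"])
t = Template(; dir = ".", julia = v"1.10", plugins = [gha])
t("MyPackage")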

See the PkgTemplates.jl docs about Template and GitHubActions for more details on customizing the .yml file. See also the GitHub Actions docs, and in particular the workflow syntax docs, for more details on what makes up the .yml file. (Be warned, these docs are quite lengthy and probably aren’t practically useful for most people to get a CI workflow up and running. For a more approachable overview of the .yml file, consider looking at this tutorial for building and testing Python.)

Once we push .github/workflows/CI.yml to GitHub, whenever branch main is pushed to, or a pull request (PR) is opened or pushed to, our package’s tests will run. This is the essence of CI: continuously making sure changes we make to our code integrate well with the code base (i.e., don’t break anything). By running tests against PRs, we can be sure changes made don’t break existing functionality.

One neat thing about GitHub Actions is that GitHub provides a status badge/icon that you can display in your package’s README. This badge lets people know

  1. that your package is regularly tested, and
  2. whether the current state of your package passes those tests.

In other words, this badge is a good way to boost confidence that your package is suitable for use. You can add this badge to your package’s README by adding something like the following markdown:

[![CI](https://github.com/username/Averages.jl/actions/workflows/CI.yml/badge.svg)](https://github.com/username/Averages.jl/actions/workflows/CI.yml)

And it will display as follows:

(Image: GitHub CI badge)

Summary

In this post, we learned how to add tests to our own Julia package. We also learned how to enable CI with GitHub Actions to run our tests against code changes to ensure our package remains in working order.

How difficult was it for you to set up CI for the first time? Do you have any tips for beginners? Let us know in the comments below!
