Re-posted from: https://blog.glcs.io/package-testing
This post was written by Steven Whitaker.
The Julia programming language is a high-level language that is known, at least in part, for its excellent package manager and outstanding composability. (See another blog post that illustrates this composability.)
Julia makes it super easy for anybody to create their own package. Julia's package manager enables easy development and testing of packages. The ease of package development encourages developers to split reusable chunks of code into individual packages, further enhancing Julia's composability.
In our previous post, we discussed how to create and register your own package. However, to encourage people to actually use your package, it helps to have an assurance that the package works. This is why testing is important. (Plus, you also want to know your package works, right?)
In this post, we will learn about some of the tools Julia provides for testing packages. We will also learn how to use GitHub Actions to run package tests against commits and/or pull requests to check whether code changes break package functionality.
This post assumes you are comfortable navigating the Julia REPL. If you need a refresher, check out our post on the Julia REPL.
Example Package
We will use a custom package called Averages.jl to illustrate how to implement testing in Julia.
The `Project.toml` looks like:
name = "Averages"uuid = "1fc6e63b-fe0f-463a-8652-42f2a29b8cc6"version = "0.1.0"[deps]Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"[extras]Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"[targets]test = ["Test"]
Note that this `Project.toml` has two more sections besides `[deps]`:
- `[extras]` is used to indicate additional packages that are not direct dependencies of the package. In this example, Test is not used in Averages.jl itself; Test is used only when running tests.
- `[targets]` is used to specify what packages are used where. In this example, `test = ["Test"]` indicates that the Test package should be used when testing Averages.jl.
The actual package code in `src/Averages.jl` looks like:

```julia
module Averages

using Statistics

export compute_average

compute_average(x) = (check_real(x); mean(x))

function compute_average(a, b...)
    check_real(a)
    N = length(a)
    for (i, x) in enumerate(b)
        check_real(x)
        check_length(i + 1, x, N)
    end
    T = float(promote_type(eltype(a), eltype.(b)...))
    average = Vector{T}(undef, N)
    average .= a
    for x in b
        average .+= x
    end
    average ./= length(b) + 1
    return a isa Real ? average[1] : average
end

function check_real(x)
    T = eltype(x)
    T <: Real || throw(ArgumentError("only real numbers are supported; unsupported type $T"))
end

function check_length(i, x, expected)
    N = length(x)
    N == expected || throw(DimensionMismatch("the length of input $i does not match the length of the first input: $N != $expected"))
end

end
```
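To get a feel for what this code does before we test it, here is a quick REPL check. The values follow directly from the definitions above (the exact output formatting may vary slightly between Julia versions):

```julia
julia> using Averages

julia> compute_average([1, 2, 3])  # mean of a single collection
2.0

julia> compute_average([1, 2, 3], [4.0, 5.0, 6.0])  # elementwise average across inputs
3-element Vector{Float64}:
 2.5
 3.5
 4.5
```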
Adding Tests
Tests for a package live in `test/runtests.jl`. (The file name is important!) Inside this file, two main testing utilities are used: `@testset` and `@test`. Additionally, `@test_throws` can also be useful for testing. The Test standard library package provides all of these macros.
- `@testset` is used to organize tests into cohesive blocks.
- `@test` is used to actually test package functionality.
- `@test_throws` is used to ensure the package throws the errors it should.
Here is how `test/runtests.jl` might look for Averages.jl:

```julia
using Averages
using Test

@testset "Averages.jl" begin
    a = [1, 2, 3]
    b = [4.0, 5.0, 6.0]
    c = (BigInt(7), 8f0, Int32(9))
    d = 10
    e = 11.0
    bad = ["hi", "hello", "hey"]

    @testset "`compute_average(x)`" begin
        @test compute_average(a) == 2
        @test compute_average(a) isa Float64
        @test compute_average(c) == 8
        @test compute_average(c) isa BigFloat
        @test compute_average(d) == 10
    end

    @testset "`compute_average(a, b...)`" begin
        @test compute_average(a, a) == a
        @test compute_average(a, b) == [2.5, 3.5, 4.5]
        @test compute_average(a, b, c) == b
        @test compute_average(a, b, c) isa Vector{Float64}
        @test compute_average(b, b, b) == b
        @test compute_average(d, e) == 10.5
    end

    @testset "Error Handling" begin
        @test_throws ArgumentError compute_average(im)
        @test_throws ArgumentError compute_average(a, bad)
        @test_throws ArgumentError compute_average(bad, c)
        @test_throws DimensionMismatch compute_average(a, b[1:2])
        @test_throws DimensionMismatch compute_average(a[1:2], b)
    end
end
```
Now let’s look more closely at the macros used:
- `@testset` can be given a label to help organize the reporting Julia does at the end of testing. Besides that, `@testset` wraps around a set of tests (including other `@testset`s).
- `@test` is given an expression that evaluates to a boolean. If the boolean is `true`, the test passes; otherwise it fails.
- `@test_throws` takes two inputs: an error type and then an expression. The test passes if the expression throws an error of the given type.
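As a minimal, self-contained illustration of these three macros (independent of Averages.jl), consider:

```julia
using Test

@testset "arithmetic sanity checks" begin
    @test 1 + 1 == 2                   # passes: the expression is true
    @test sqrt(2)^2 ≈ 2                # `≈` (isapprox) is convenient for floating-point comparisons
    @test_throws DomainError sqrt(-1)  # passes: sqrt throws a DomainError for negative real inputs
end
```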
Testing Against Other Packages
In some cases, you might want to ensure your package is compatible with a type defined in another package. For our example, let's test against StaticArrays.jl. Our package does not depend on StaticArrays.jl, so we need to add it as a test-only dependency by editing the `[extras]` and `[targets]` sections in the `Project.toml`:

```toml
[extras]
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["StaticArrays", "Test"]
```
(Note that I grabbed the UUID for StaticArrays.jl from its `Project.toml` on GitHub.)
Then we can add some tests to make sure `compute_average` is generic enough to work with `StaticArray`s:

```julia
using Averages
using Test
using StaticArrays

@testset "Averages.jl" begin
    # (the variable definitions and test sets from before remain here)

    @testset "StaticArrays.jl" begin
        s = SA[12, 13, 14]
        @test compute_average(s) == 13
        @test compute_average(s, s) == [12, 13, 14]
        @test compute_average(a, b, s) == [17/3, 20/3, 23/3]
        @test compute_average(s, a, c) == [20/3, 23/3, 26/3]
    end
end
```
Running Tests Locally
Now Averages.jl is ready for testing. To run package tests on your own computer, start Julia, activate the package environment, and then run `test` from the package prompt:

```
(@v1.X) pkg> activate /path/to/Averages

(Averages) pkg> test
```
The first thing `test` does is set up a temporary package environment for testing that includes the packages defined in the `test` target in the `Project.toml`. Then it runs the tests and displays the result:

```
     Testing Running tests...
Test Summary: | Pass  Total  Time
Averages.jl   |   20     20  0.7s
     Testing Averages tests passed
```
If a test fails, the result looks like this:

```
     Testing Running tests...
`compute_average(a, b...)`: Test Failed at /path/to/Averages/test/runtests.jl:27
  Expression: compute_average(a, b) == [2.0, 3.5, 4.5]
   Evaluated: [2.5, 3.5, 4.5] == [2.0, 3.5, 4.5]
Stacktrace:
 [1] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:672 [inlined]
 [2] macro expansion
   @ /path/to/Averages/test/runtests.jl:27 [inlined]
 [3] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:1577 [inlined]
 [4] macro expansion
   @ /path/to/Averages/test/runtests.jl:26 [inlined]
 [5] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:1577 [inlined]
 [6] top-level scope
   @ /path/to/Averages/test/runtests.jl:7
Test Summary:                | Pass  Fail  Total  Time
Averages.jl                  |   19     1     20  0.9s
  `compute_average(x)`       |    5            5  0.1s
  `compute_average(a, b...)` |    5     1      6  0.6s
  Error Handling             |    5            5  0.0s
  StaticArrays.jl            |    4            4  0.2s
ERROR: LoadError: Some tests did not pass: 19 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /path/to/Averages/test/runtests.jl:5
ERROR: Package Averages errored during testing
```
Some things to note:
- When all tests in a test set pass, the test summary does not report the individual results of nested test sets. When a test fails, the results of nested test sets are reported individually to show more precisely where the failure occurred.
- When a test fails, the file and line number of the failing test are reported, along with the expression that failed. This information is displayed for every failure that occurs.
- The test summary reports how many tests passed and how many failed in each test set, in addition to how long each test set took.
- Tests in a test set continue to run after a test fails. To have a test set stop on failure, use the `failfast` option (available only in Julia 1.9 and later):

```julia
@testset failfast = true "Averages.jl" begin
```
Now, when developing Averages.jl, we can run the tests locally to ensure we don't break any functionality!
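As a side note, the tests can also be run non-interactively (e.g., from a shell script or another Julia session) with the Pkg API. A minimal sketch, assuming the package still lives at /path/to/Averages:

```julia
using Pkg

Pkg.activate("/path/to/Averages")  # activate the package environment
Pkg.test()                         # equivalent to running `test` at the package prompt
```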
Running Tests with GitHub Actions
Besides running tests locally, one can use GitHub Actions to run tests on one of GitHub's servers. One advantage is that it enables automated testing on various machines/operating systems and across various Julia versions. Automating tests in this way is an essential part of continuous integration (CI) (so much so that the phrase “running CI” is often treated as equivalent to “running tests via GitHub Actions”, even though CI technically involves more than just testing).
To enable testing via GitHub Actions, we just need to add an appropriate `.yml` file in the `.github/workflows` directory of our package. As mentioned in our previous post, PkgTemplates.jl can automatically generate the necessary `.yml` file. This is the default CI workflow generated by PkgTemplates.jl:

```yaml
name: CI
on:
  push:
    branches:
      - main
    tags: ['*']
  pull_request:
  workflow_dispatch:
concurrency:
  # Skip intermediate builds: always.
  # Cancel intermediate builds: only if it is a pull request build.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ startsWith(github.ref, 'refs/pull/') }}
jobs:
  test:
    name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ github.event_name }}
    runs-on: ${{ matrix.os }}
    timeout-minutes: 60
    permissions: # needed to allow julia-actions/cache to proactively delete old caches that it has created
      actions: write
      contents: read
    strategy:
      fail-fast: false
      matrix:
        version:
          - '1.10'
          - '1.6'
          - 'pre'
        os:
          - ubuntu-latest
        arch:
          - x64
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: ${{ matrix.version }}
          arch: ${{ matrix.arch }}
      - uses: julia-actions/cache@v2
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1
```
For most users, the most relevant fields to customize are `version` and `os` (under `jobs: test: strategy: matrix`). Under `os`, specify the operating systems to run tests on (e.g., `ubuntu-latest`, `windows-latest`, `macOS-latest`). Under `version`, specify the versions of Julia to use when testing:
- `'1.X'` means run on Julia 1.X.Y, where Y is the largest patch of Julia 1.X that has been released. For example, `'1.9'` means run on Julia 1.9.4.
- `'1'` means run on the latest stable version of Julia.
- `'pre'` means run on the latest pre-release version of Julia.
- `'lts'` means run on Julia's long-term support (LTS) version.
Usually, it makes sense just to test `'1'` and `'pre'` to ensure compatibility with the current and upcoming Julia versions.
One can also fine-tune the `version` and `os` fields, as well as other fields, when generating a package with PkgTemplates.jl. For example, to generate the `.yml` file to run tests only on Windows with Julia 1.8 and the latest pre-release version of Julia:

```julia
using PkgTemplates

gha = GitHubActions(; linux = false, windows = true, extra_versions = ["1.8", "pre"])
t = Template(; dir = ".", plugins = [gha])
t("MyPackage")
```
Note that the generated `.yml` file will also include testing on Julia 1.6. The `Template` constructor has a keyword argument `julia` that sets the minimum version of Julia you want your package to support, and this version is included in testing. As of this writing, the default minimum version is Julia 1.6.
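If you want to test against a different minimum version, you can pass it to `Template` via that `julia` keyword. A minimal sketch, assuming a `VersionNumber` is accepted here (check the PkgTemplates.jl docs for your version):

```julia
using PkgTemplates

# Hypothetical example: declare Julia 1.10 as the minimum supported version,
# so the generated CI workflow tests on 1.10 instead of 1.6.
gha = GitHubActions(; linux = false, windows = true, extra_versions = ["pre"])
t = Template(; dir = ".", julia = v"1.10", plugins = [gha])
t("MyPackage")
```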
See the PkgTemplates.jl docs about `Template` and `GitHubActions` for more details on customizing the `.yml` file. See also the GitHub Actions docs, and in particular the workflow syntax docs, for more details on what makes up the `.yml` file. (Be warned, these docs are quite lengthy and probably aren't practically useful for most people trying to get a CI workflow up and running. For a more approachable overview of the `.yml` file, consider looking at this tutorial for building and testing Python.)
Once we push `.github/workflows/CI.yml` to GitHub, our package's tests will run whenever branch `main` is pushed to or a pull request (PR) is opened or pushed to. This is the essence of CI: continuously making sure changes we make to our code integrate well with the code base (i.e., don't break anything). By running tests against PRs, we can be sure the changes made don't break existing functionality.
One neat thing about GitHub Actions is that GitHub provides a status badge/icon that you can display in your package's README. This badge lets people know
- that your package is regularly tested, and
- whether the current state of your package passes those tests.
In other words, this badge is a good way to boost confidence that your package is suitable for use. You can add this badge to your package's README by adding something like the following markdown:

```markdown
[![CI](https://github.com/username/Averages.jl/actions/workflows/CI.yml/badge.svg)](https://github.com/username/Averages.jl/actions/workflows/CI.yml)
```
And it will display as a badge showing the current CI status (passing or failing).
Summary
In this post, we learned how to add tests to our own Julia package. We also learned how to enable CI with GitHub Actions to run our tests against code changes to ensure our package remains in working order.
How difficult was it for you to set up CI for the first time? Do you have any tips for beginners? Let us know in the comments below!
Additional Links
- Julia Testing Docs: Official Julia documentation on testing.
- PkgTemplates.jl Docs: Documentation for PkgTemplates.jl, including potential customizations to the generated CI workflow.