Author Archives: Steven Whitaker

Julia 1.11: Top Features and Important Updates

By: Steven Whitaker

Re-posted from: https://blog.glcs.io/julia-1-11

A new version of the Julia programming language was just released! Version 1.11 is now the latest stable version of Julia.

This release is a minor release, meaning it includes language enhancements and bug fixes but should also be fully compatible with code written in previous Julia versions (from version 1.0 onward).

In this post, we will check out some of the features and improvements introduced in this newest Julia version. Read the full post, or click on the links below to jump to the features that interest you.

If you are new to Julia (or just need a refresher), feel free to check out our Julia tutorial series, beginning with how to install Julia and VS Code.

Improved Clarity of Public API with public Keyword

An important aspect of semantic versioning (aka SemVer, which Julia and Julia packages follow) is the public API users have access to. In essence, SemVer states that minor package updates should not break compatibility with existing code that uses the public API. However, other parts of the code are free to change in a minor update. To illustrate:

  • If I have sum([1, 2, 3]) in my code that I wrote in Julia 1.10, it will continue to return 6 in Julia 1.11 because sum is part of Julia’s public API. But it could break in Julia 2.0. (Hopefully not, though!)
  • If I have SomePackage._internal_function(0) in my code that I wrote with SomePackage v1.2.0, it might error when SomePackage upgrades to v1.2.1 (because, for example, _internal_function got deleted). Such a change would be allowed because _internal_function is not part of the public API.

So, the question is, how does a user know what the public API of a package is? Historically, a few conventions have been followed:

  • Names that are exported (e.g., export DataFrame) are part of the public API.
  • Unexported names that are prefixed with an underscore (e.g., SomePackage._internal_function) are not part of the public API.

But what about a function like ForwardDiff.gradient? That function is the reason why 99% of users load the ForwardDiff package, but it’s not exported! The good news is that it’s still part of the public API because, well, ForwardDiff’s maintainers say so. Or maybe it’s because the documentation says so. Or maybe it’s because enough people use it? Sometimes it’s not entirely clear.

But now in Julia 1.11, your code can indicate the public API! This is thanks to a new public keyword. Now, all symbols that are marked with public are part of the public API. (Note that this is in addition to exported symbols, i.e., func would be considered public API with either export func or public func.)

Usage of public is the same as export. For example:

module MyPackage

public add1

"""Docstring for a public function."""
add1(x) = x + 1

"""Docstring for a private function."""
private_add1(x) = x + 1

end

With using MyPackage, no symbols are made available (except MyPackage), e.g., add1 can only be called via MyPackage.add1, because nothing is exported with export. And both MyPackage.add1(1) and MyPackage.private_add1(1) work, even though add1 is public and private_add1 is private. So, the public keyword doesn’t change how MyPackage works or is used.
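To make this concrete, here is a minimal REPL sketch using the MyPackage module defined above (error message abbreviated; exact wording varies by Julia version):

julia> using MyPackage

julia> add1(1)                    # not exported, so not brought into scope
ERROR: UndefVarError: `add1` not defined

julia> MyPackage.add1(1)          # public API
2

julia> MyPackage.private_add1(1)  # internal, but still callable
2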

However, the public keyword does change some behaviors. The most notable difference is when displaying documentation in the REPL’s help mode:

help?> MyPackage.add1
  Docstring for a public function.

help?> MyPackage.private_add1
  │ Warning
  │
  │ The following bindings may be internal; they may change or be removed
  │ in future versions:
  │
  │   • MyPackage.private_add1

  Docstring for a private function.

See the warning with private_add1? Such warnings, in addition to the more straightforward documentation of the public API, may help reduce usage of package internals (particularly accidental usage), which in turn may help improve the stability of Julia packages.

To summarize, even though the public keyword doesn’t change how a package works or is used, it does provide a mechanism for clearly stating the public API and for providing warnings when viewing the documentation for internal functions. This, in turn, may improve the stability of Julia packages as they adhere more closely to public APIs.

Read more about public in the Julia manual.

Standardized Entry Point for Scripts

There is now a standardized entry point when running scripts from the command line.

Julia 1.11 introduces the @main macro. This macro, when placed in a script, and when the script is run via the command line, tells Julia to run a function called main after running the code in the script. main will be passed a Vector of Strings containing the command line arguments passed to the script.

To use @main, just include it in your script on its own line after defining your main function. (If @main occurs before defining main, for example at the top of the script, an error will be thrown, so the ordering matters.)

Of course, for this to work there has to be a function main with a method that takes a Vector{String} as the only input.

Let’s look at an example to illustrate how this works. Say we have the following in test.jl:

print_val(x) = print(x)

function main(args)
    print_val(args)
end

@main

If we run this file in the REPL with include("test.jl"), the functions print_val and main will be defined, but main will not get called. This is the same behavior as when @main is not present.

On the other hand, if we run this file via the command line with julia test.jl, the functions print_val and main will be defined and then main will be called with the command line arguments as the input. To illustrate:

  • julia test.jl will call main(String[]) (because no command line arguments were passed).
  • julia test.jl 1 hello will call main(["1", "hello"]).

As a result of @main, a Julia file can have different behavior depending on whether it is run as a script or not.

If you’re familiar with Python, @main might remind you of if __name__ == "__main__". However, there is one significant difference:

  • In Python, if script1.py imports script2.py and script2.py has the “if main” check, running script1.py as a script will not run script2.py’s “if main” code.
  • In Julia, if script1.jl includes script2.jl and script2.jl uses @main, running script1.jl as a script will run script2.jl’s main function (see the sketch below). (Technicality: Unless script1.jl defines an appropriate main method, in which case script1.jl’s main would be called, even if script1.jl did not include @main.)
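To illustrate the Julia behavior from the second bullet, here is a hypothetical pair of scripts (the file names and printed message are made up for this example):

# script2.jl
function main(args)
    println("script2's main called with ", args)
end
@main

# script1.jl
include("script2.jl")

# Running `julia script1.jl foo` prints:
#   script2's main called with ["foo"]
# because the `main` defined in script2.jl becomes the entry point
# for script1.jl as well.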

This isn’t to say Julia’s @main is bad or wrong; it’s just important to know that it works differently than Python. And it’s still cool to have a standardized entry point for Julia scripts now!

Read more about @main in the Julia manual.

Improved String Styling

Julia 1.11 introduces a new StyledStrings.jl standard library package. This package provides a convenient way to add styling to strings. StyledStrings makes printing styled strings much easier than calling printstyled, particularly when different parts of the string have different styles.

The easiest way to create a styled string is with styled"...". For example:

using StyledStrings

styled_string = styled"{italic:This} is a {bold,bright_cyan:styled string}!"

Then, when printing the styled string, it will display according to the provided annotations.

Also, because the style information is stored with the string, it can easily be preserved across string manipulations such as string concatenation or grabbing a substring.
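For example, here is a minimal sketch of concatenation carrying the styling along (assuming a REPL or terminal that renders the styles):

using StyledStrings

a = styled"{bold:important}"
b = styled" {italic:note}"
c = a * b  # concatenation keeps both annotations
print(c)   # prints "important note" with the bold and italic styling intact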

Check out the documentation for more information about the variety of different annotations StyledStrings supports.

And here’s some more information from the State of Julia talk at JuliaCon 2024:

Slide about StyledStrings

New Functions for Testing Existence of Documentation

Julia 1.11 makes it easy to determine programmatically whether a function has a docstring. This can be useful for, e.g., CI checks to ensure a package is well documented.

There are two functions for this purpose. The first is Docs.hasdoc, which is used to query a particular function. hasdoc takes two inputs: the module to look in and the name (as a Symbol) of the function. For example:

julia> Docs.hasdoc(Base, :sum)
true

The other function provided is Docs.undocumented_names, which returns a list of a module’s public names that have no docstrings. (Note that public names include symbols exported via export as well as symbols declared as public via public.) For example:

julia> module Example
       export f1, f4
       public f2, f5
       "Exported, documented"
       f1() = 1
       "Public, documented"
       f2() = 2
       "Internal, documented"
       f3() = 3
       # Exported, undocumented
       f4() = 4
       # Public, undocumented
       f5() = 5
       # Internal, undocumented
       f6() = 6
       end
Main.Example

# Note that `f6` is not returned because it is neither exported nor public.
julia> Docs.undocumented_names(Example)
3-element Vector{Symbol}:
 :Example
 :f4
 :f5

It will be interesting to see what tooling arisesto take advantage of these functions.
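In the meantime, a package’s own test suite could already assert that every public name is documented. Here is a minimal sketch using the Example module above (this particular test would fail, since f4 and f5 lack docstrings):

using Test

@testset "Documentation coverage" begin
    @test isempty(Docs.undocumented_names(Example))
end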

More Complete @timed Macro

The @timed macro provides more complete timing information in Julia 1.11.

Previously, @timed gave run time and allocation/garbage collection information, but nothing about compilation time. Now, compilation time is included.
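For example, @timed returns a named tuple, so the new information is just another field (a quick sketch; field names as of Julia 1.11):

stats = @timed sum(rand(10^6))
stats.value         # the result of the timed expression
stats.time          # elapsed run time (seconds)
stats.bytes         # bytes allocated
stats.gctime        # garbage collection time (seconds)
stats.compile_time  # compilation time (seconds), newly reported in 1.11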

But why care about @timed when @time already gave all that info? Because @time is hard-coded to print to stdout, meaning there’s no way to capture the information, e.g., for logging purposes.

I actually had a project where I wanted to redirect the output of @time to a log file. I couldn’t just use redirect_stdio because that would also redirect the output of the code being timed. I ended up using @timed along with Base.time_print to create the log statements, but I was disappointed @timed didn’t give me compilation time information. Well, now it does!
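As a sketch of that kind of logging (my_expensive_work is a hypothetical placeholder for whatever code is being timed):

stats = @timed my_expensive_work()  # `my_expensive_work` is hypothetical
@info "Timing results" time = stats.time gctime = stats.gctime compile_time = stats.compile_time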

New Convenience Function logrange

Pop quiz: Which of the following is the correct way to create a logarithmically spaced range of numbers?

  1. log.(range(exp(a), exp(b), N))
  2. exp.(range(log(a), log(b), N))

I have occasionally needed to use logarithmically spaced ranges of numbers, not so frequently that I memorized which expression to use, but frequently enough that I developed a real distaste for the mental gymnastics I had to go through every time just to remember where to put the exps and logs. Maybe I should have just taken some time to memorize the answer…

But now it doesn’t matter! The correct answer is neither 1 nor 2, but logrange(a, b, N)! Here’s an example usage:

julia> logrange(1, 10, 5)
5-element Base.LogRange{Float64, Base.TwicePrecision{Float64}}:
 1.0, 1.77828, 3.16228, 5.62341, 10.0

I know it’s a fairly minor change, but the addition of logrange in Julia 1.11 is probably the change I’m most excited about. There was much rejoicing when I saw the news!

Summary

In this post, we learned about some of the new features and improvements introduced in Julia 1.11. Curious readers can check out the release notes for the full list of changes.

Note also that with the new release, Julia 1.10 will now become the LTS (long-term support) version, replacing Julia 1.6. As a result, Julia 1.10 will receive maintenance updates instead of Julia 1.6 (which has now reached end of support) until the next LTS version is announced. If you want to learn more about what changes Julia 1.10 brought, check out our post!

What are you most excited about in Julia 1.11? Let us know in the comments below!


Mastering Efficient Array Operations with StaticArrays.jl in Julia

By: Steven Whitaker

Re-posted from: https://blog.glcs.io/staticarrays

The Julia programming language is known for being a high-level language that can still compete with C in terms of performance. As such, Julia already has performant data structures built in, such as arrays. But what if arrays could be even faster? That’s where the StaticArrays.jl package comes in.

StaticArrays.jl provides drop-in replacements for Array, the standard Julia array type. These StaticArrays work just like Arrays, but they provide one additional piece of information in the type: the size of the array. Consequently, you can’t insert or remove elements of a StaticArray; they are statically sized arrays (hence the name). However, this restriction allows more information to be given to Julia’s compiler, which in turn results in more efficient machine code (for example, via loop unrolling and SIMD operations). The resulting speed-up can often be 10x or more!
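For example, the length of an SVector appears directly in its type (a quick sketch; the exact type display can vary across versions):

julia> using StaticArrays

julia> typeof(SVector(1.0, 2.0, 3.0))  # the length (3) is part of the type
SVector{3, Float64} (alias for SArray{Tuple{3}, Float64, 1, 3})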

In this post, we will learn how to use StaticArrays.jl and compare the performance of StaticArrays to that of regular Arrays for several different operations.

Note that the code examples in this post assume StaticArrays.jl has been installed and loaded:

# Press ] to enter the package prompt.
pkg> add StaticArrays

# Press Backspace to return to the Julia prompt.
julia> using StaticArrays

(Check out our post on the Julia REPL for more details about the package prompt and navigating the REPL.)

How to Use StaticArrays.jl

When working with StaticArrays.jl, typically one will use the SVector type or the SMatrix type. (There is also the SArray type for N-dimensional arrays, but we will focus on 1D and 2D arrays in this post.) SVectors and SMatrixes have both static size and static data, meaning the data contained in such objects cannot be modified. For statically sized arrays whose contents can be modified, StaticArrays.jl provides MVector and MMatrix (and MArray). We will stick with SVectors and SMatrixes in this post unless we specifically need mutability.
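For example, mutating an SVector throws an error, while an MVector can be modified in place (a minimal sketch; error message abbreviated):

julia> s = SVector(1, 2, 3);

julia> s[1] = 10
ERROR: setindex!(::SVector{3, Int64}, value, ::Int) is not defined

julia> m = MVector(1, 2, 3);

julia> m[1] = 10;

julia> m
3-element MVector{3, Int64} with indices SOneTo(3):
 10
  2
  3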

Constructors

There are three ways to construct StaticArrays.

  1. Convenience constructor SA:

    julia> SA[1, 2, 3]
    3-element SVector{3, Int64} with indices SOneTo(3):
     1
     2
     3

    julia> SA[1 2; 3 4]
    2×2 SMatrix{2, 2, Int64, 4} with indices SOneTo(2)×SOneTo(2):
     1  2
     3  4
  2. Normal constructor functions:

    julia> SVector(1, 2)
    2-element SVector{2, Int64} with indices SOneTo(2):
     1
     2

    julia> SMatrix{2,3}(1, 2, 3, 4, 5, 6)
    2×3 SMatrix{2, 3, Int64, 6} with indices SOneTo(2)×SOneTo(3):
     1  3  5
     2  4  6
  3. Macros:

    julia> @SVector [1, 2, 3]
    3-element SVector{3, Int64} with indices SOneTo(3):
     1
     2
     3

    julia> @SMatrix [1 2; 3 4]
    2×2 SMatrix{2, 2, Int64, 4} with indices SOneTo(2)×SOneTo(2):
     1  2
     3  4

    Note that using macros also enables a convenient way to create StaticArrays from common array-creation functions (eliminating the need to create an Array first just to convert it immediately to a StaticArray):

    @SVector [10 * i for i = 1:10]
    @SVector zeros(5)
    @SVector rand(7)
    @SMatrix [(i, j) for i = 1:2, j = 1:3]
    @SMatrix zeros(2, 2)
    @SMatrix randn(6, 6)

Conversion to/from Array

It may occasionally be necessary to convert to or from Arrays. To convert from an Array to a StaticArray, use the appropriate constructor function. However, because Arrays do not have size information in the type, we ourselves must provide the size to the constructor:

SVector{3}([1, 2, 3])
SMatrix{4,4}(zeros(4, 4))

To convert back to an Array, use the collect function:

julia> collect(SVector(1, 2))
2-element Vector{Int64}:
 1
 2

Comparing StaticArrays to Arrays

Once a StaticArray is created, it can be operated on in the same way as an Array. To illustrate, we will run a simple benchmark, both to compare the run-time speeds of the two types of arrays and to show that the same code can work with either type of array.


Here’s the benchmark code, inspired by StaticArrays.jl’s benchmark:

using BenchmarkTools, StaticArrays, LinearAlgebra, Printf

add!(C, A, B) = C .= A .+ B

function run_benchmarks(N)
    A = rand(N, N); A = A' * A
    B = rand(N, N)
    C = Matrix{eltype(A)}(undef, N, N)
    D = rand(N)
    SA = SMatrix{N,N}(A)
    SB = SMatrix{N,N}(B)
    MA = MMatrix{N,N}(A)
    MB = MMatrix{N,N}(B)
    MC = MMatrix{N,N}(C)
    SD = SVector{N}(D)
    speedup = [
        @belapsed($A + $B) / @belapsed($SA + $SB),
        @belapsed(add!($C, $A, $B)) / @belapsed(add!($MC, $MA, $MB)),
        @belapsed($A * $B) / @belapsed($SA * $SB),
        @belapsed(mul!($C, $A, $B)) / @belapsed(mul!($MC, $MA, $MB)),
        @belapsed(norm($D)) / @belapsed(norm($SD)),
        @belapsed(det($A)) / @belapsed(det($SA)),
        @belapsed(inv($A)) / @belapsed(inv($SA)),
        @belapsed($A \ $D) / @belapsed($SA \ $SD),
        @belapsed(eigen($A)) / @belapsed(eigen($SA)),
        @belapsed(map(abs, $A)) / @belapsed(map(abs, $SA)),
        @belapsed(sum($D)) / @belapsed(sum($SD)),
        @belapsed(sort($D)) / @belapsed(sort($SD)),
    ]
    return speedup
end

function main()
    benchmarks = [
        "Addition",
        "Addition (in-place)",
        "Multiplication",
        "Multiplication (in-place)",
        "L2 Norm",
        "Determinant",
        "Inverse",
        "Linear Solve (A \\ b)",
        "Symmetric Eigendecomposition",
        "`map`",
        "Sum of Elements",
        "Sorting",
    ]
    N = [3, 5, 10, 30]
    speedups = map(run_benchmarks, N)
    fmt_header = Printf.Format("%-$(maximum(length.(benchmarks)))s" * " | %7s"^length(N))
    header = Printf.format(fmt_header, "Benchmark", string.("N = ", N)...)
    println(header)
    println("="^length(header))
    fmt = Printf.Format("%-$(maximum(length.(benchmarks)))s" * " | %7.1f"^length(N))
    for i = 1:length(benchmarks)
        println(Printf.format(fmt, benchmarks[i], getindex.(speedups, i)...))
    end
end

main()

Notice that all the functions called when creating the array speedup in run_benchmarks are the same whether using Arrays or StaticArrays, illustrating that StaticArrays are drop-in replacements for standard Arrays.

Running the above code prints the following results on my laptop (the numbers indicate the speedup of StaticArrays over normal Arrays; e.g., a value of 17.7 means using StaticArrays was 17.7 times faster than using Arrays):

Benchmark                    |   N = 3 |   N = 5 |  N = 10 |  N = 30
====================================================================
Addition                     |    17.7 |    14.5 |     7.9 |     2.0
Addition (in-place)          |     1.6 |     1.3 |     1.4 |     0.7
Multiplication               |     8.2 |     7.0 |     4.2 |     2.6
Multiplication (in-place)    |     1.9 |     5.9 |     3.0 |     1.0
L2 Norm                      |     4.2 |     4.0 |     5.4 |     9.7
Determinant                  |    66.6 |     2.5 |     1.3 |     0.9
Inverse                      |    54.8 |     5.9 |     1.8 |     0.9
Linear Solve (A \ b)         |    65.5 |     3.7 |     1.8 |     0.9
Symmetric Eigendecomposition |     3.7 |     1.0 |     1.0 |     1.0
`map`                        |    10.6 |     8.2 |     4.9 |     2.1
Sum of Elements              |     1.5 |     1.1 |     1.7 |     2.1
Sorting                      |     7.1 |     2.9 |     1.5 |     1.1

There are two main conclusions from this table. First, using StaticArrays instead of Arrays can result in some nice speed-ups! Second, the gains from using StaticArrays tend to diminish as the sizes of the arrays increase. So, you can’t expect StaticArrays.jl to always magically make your code faster, but if your arrays are small enough (the recommendation being fewer than about 100 elements) then you can expect to see some good speed-ups.

Of course, the above code timed just individual operations; how much faster a particular application would be is a different matter.

For example, consider a physical simulation where many 3D vectors are manipulated over several time steps. Since 3D vectors are static in size (i.e., are 1D arrays with exactly three elements), such a situation is a prime example of where StaticArrays.jl is useful. To illustrate, here is an example (taken from the field of magnetic resonance imaging) of a physical simulation using Arrays vs using StaticArrays:

using BenchmarkTools, StaticArrays, LinearAlgebra

function sim_arrays(N)
    M = Matrix{Float64}(undef, 3, N)
    M[1,:] .= 0.0
    M[2,:] .= 0.0
    M[3,:] .= 1.0
    M2 = similar(M)
    (sin, cos) = sincosd(30)
    R = [1 0 0; 0 cos sin; 0 -sin cos]
    E1 = exp(-0.01)
    E2 = exp(-0.1)
    (sin, cos) = sincosd(1)
    F = [E2 * cos E2 * sin 0; -E2 * sin E2 * cos 0; 0 0 E1]
    FR = F * R
    C = [0, 0, 1 - E1]
    # Run for 100 time steps (each loop iteration does 2 time steps).
    for t = 1:50
        mul!(M2, FR, M)
        M2 .+= C
        mul!(M, FR, M2)
        M .+= C
    end
    total = sum(M; dims = 2)
    return complex(total[1], total[2])
end

function sim_staticarrays(N)
    M = fill(SVector(0.0, 0.0, 1.0), N)
    (sin, cos) = sincosd(30)
    R = @SMatrix [1 0 0; 0 cos sin; 0 -sin cos]
    E1 = exp(-0.01)
    E2 = exp(-0.1)
    (sin, cos) = sincosd(1)
    F = @SMatrix [E2 * cos E2 * sin 0; -E2 * sin E2 * cos 0; 0 0 E1]
    FR = F * R
    C = @SVector [0, 0, 1 - E1]
    # Run for 100 time steps (each loop iteration does 1 time step).
    for t = 1:100
        # Apply simulation dynamics to each 3D vector.
        for i = 1:length(M)
            M[i] = FR * M[i] + C
        end
    end
    total = sum(M)
    return complex(total[1], total[2])
end

function main(N)
    r1 = @btime sim_arrays($N)
    r2 = @btime sim_staticarrays($N)
    @assert r1 ≈ r2 # Make sure the results are the same.
end

The speed-ups on my laptop for different values of N were as follows:

  • N = 10: 14.6x faster
  • N = 100: 7.1x faster
  • N = 1000: 5.2x faster

(Here, N is the number of 3D vectors in the simulation, not the size of the StaticArrays.)

Note also that I wrote sim_arrays to be as performant as possible by doing in-place operations (like mul!), which has the unfortunate side effect of making the code a bit harder to read. Therefore, sim_staticarrays is both faster and easier to read!

As another example of how StaticArrays.jl can speed up a more involved application, see the DifferentialEquations.jl docs.

Summary

In this post, we discussed StaticArrays.jl. We saw that StaticArrays are drop-in replacements for regular Julia Arrays. We also saw that using StaticArrays can result in some nice speed-ups over using Arrays, at least when the sizes of the arrays are not too big.

Are array operations a bottleneck in your code? Try out StaticArrays.jl and then comment below how it helps!

Additional Links

Cover image background from https://openverse.org/image/875bf026-11ef-47a8-a63c-ee1f1877c156?q=circuit%20board%20array.

Efficient Julia Optimization through an MRI Case Study

By: Steven Whitaker

Re-posted from: https://glcs.hashnode.dev/optimization-mri

Welcome to our first Julia for Devs blog post! This will be a continuously running series of posts where our team will discuss how we have used Julia to solve real-life problems. So, let’s get started!

In this Julia for Devs post, we will discuss using Julia to optimize scan parameters in magnetic resonance imaging (MRI).

If you are interested just in seeing example code showing how to use Julia to optimize your own cost function, feel free to skip ahead.

While we will discuss MRI specifically, the concepts (and even some of the code) we will see will be applicable in many situations where optimization is useful, particularly when there is a model for an observable signal. The model could be, e.g., a mathematical equation or a trained neural network, and the observable signal (aka the output of the model) could be, e.g., the image intensity of a medical imaging scan or the energy needed to heat a building.

Problem Setup

In this post, the signal model we will use is a mathematical equation that gives the image intensity of a balanced steady-state free precession (bSSFP) MRI scan. This model has two sets of inputs: those that are user-defined and those that are not. In MRI, user-defined inputs are scan parameters, such as how long to run the scan for or how much power to use to generate MRI signal. The other input parameters are called tissue parameters because they are properties that are intrinsic to the imaged tissue (e.g., muscle, brain, etc.).

Tissue parameters often are assumed to take on a pre-defined value for each specific tissue type. However, because they are tissue-specific and can vary with health and age, sometimes tissue parameters are considered to be unknown values that need to be estimated from data.

The problem we will discuss in this post is how to estimate tissue parameters from a set of bSSFP scans. Then we will discuss how to optimize the scan parameters of the set of bSSFP scans to improve the tissue parameter estimates.

To estimate tissue parameters, we need the following:

  1. a signal model, and
  2. an estimation algorithm.

Signal Model

Here’s the signal model:

# Reference: https://www.mriquestions.com/true-fispfiesta.html
function bssfp(TR, flip, T1, T2)
    E1 = exp(-TR / T1)
    E2 = exp(-TR / T2)
    (sin, cos) = sincosd(flip)
    num = sin * (1 - E1)
    den = 1 - cos * (E1 - E2) - E1 * E2
    signal = num / den
    return signal
end

This function returns the bSSFP signal as a function of two scan parameters (TR, a value in units of seconds, and flip, a value in units of degrees) and two tissue parameters (T1 and T2, each a value in units of seconds).
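For example, a single evaluation with plausible values (the value in the comment is approximate, computed from the formula above):

# TR = 5 ms, flip angle = 30 degrees, T1 = 1 s, T2 = 80 ms
signal = bssfp(0.005, 30, 1.0, 0.08)  # ≈ 0.146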

Estimation Algorithm

There are various estimation algorithms out there, but, for simplicity in this post, we will stick with a grid search. We will compute the bSSFP signal over many pairs of T1 and T2 values. We will compare these computed signals to the actual observed signal, and the pair of T1 and T2 values that results in the closest match will be chosen as the estimates of the tissue parameters. Here’s the algorithm in code:

# `signal` is the observed signal.
# `TR` and `flip` are the scan parameters that were used,
# and we want to estimate `T1` and `T2`.
# `signal`, `TR` and `flip` should be vectors of the same length,
# representing running multiple scans
# and recording the observed signal for each pair of `TR` and `flip` values.
function gridsearch(signal, TR, flip)
    # Specify the grid of values to search over.
    # `T1_grid` and `T2_grid` could optionally be inputs to this function.
    T1_grid = exp10.(range(log10(0.5), log10(3), 40))
    T2_grid = exp10.(range(log10(0.005), log10(0.7), 40))
    T1_est = T1_grid[1]
    T2_est = T2_grid[1]
    best = Inf
    # Pre-allocate memory for some computations to speed up the following loop.
    residual = similar(signal)
    # Iterate over the Cartesian product of `T1_grid` and `T2_grid`.
    for (T1, T2) in Iterators.product(T1_grid, T2_grid)
        # Physical constraint: T1 is greater than T2, and both are positive.
        T1 > T2 > 0 || continue
        # For the given T1 and T2 pair, compute the (noiseless) bSSFP signal
        # for each bSSFP scan and subtract them from the given signals.
        # Tip: In Julia, one can apply a scalar function elementwise on vector
        #      inputs using the "dot" notation (see the `.` after `bssfp` below).
        residual .= signal .- bssfp.(TR, flip, T1, T2)
        # Compute the norm squared of the above difference.
        err = residual' * residual
        # If the candidate T1 and T2 pair fit the given signals better,
        # keep them as the current estimate of T1 and T2.
        if err < best
            best = err
            T1_est = T1
            T2_est = T2
        end
    end
    isinf(best) && error("no valid grid points; consider changing `T1_grid` and/or `T2_grid`")
    return (T1_est, T2_est)
end
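As a quick sanity check on the estimator, here is a hypothetical noiseless round trip (the scan parameter values are chosen arbitrarily for illustration):

# Simulate noiseless signals for known tissue parameters (T1 = 1 s, T2 = 80 ms),
# then confirm the grid search recovers values close to the truth.
TR = [0.005, 0.005, 0.005]
flip = [30.0, 60.0, 80.0]
signal = bssfp.(TR, flip, 1.0, 0.08)
(T1_est, T2_est) = gridsearch(signal, TR, flip)
# Expect (T1_est, T2_est) to be the grid points nearest (1.0, 0.08).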

Cost Function

Now, to optimize scan parameters we need a function to optimize, also called a cost function. In this case, we want to minimize the error between what our estimator tells us and the true tissue parameter values. But because we need to optimize scan parameters before running any scans, we will simulate bSSFP MRI scans using the given sets of scan parameters and average the estimator error over several sets of tissue parameters. Here’s the code for the cost function:

# Load the Statistics standard library package to get access to the `mean` function.
using Statistics

# `TR` and `flip` are vectors of the same length
# specifying the pairs of scan parameters to use.
function estimator_error(TR, flip)
    # Specify the grid of values to average over and the noise level to simulate.
    # For a given pair of `T1_true` and `T2_true` values,
    # bSSFP signals will be simulated, and then the estimator
    # will attempt to estimate what `T1_true` and `T2_true` values were used
    # based on the simulated bSSFP signals and the given scan parameters.
    T1_true = [0.8, 1.0, 1.5]
    T2_true = [0.05, 0.08, 0.1]
    noise_level = 0.01
    T1_avg = mean(T1_true)
    T2_avg = mean(T2_true)
    # Pre-allocate memory for some computations to speed up the following loop.
    signal = float.(TR)
    # The following computes the mean estimator error over all pairs
    # of true T1 and T2 values.
    return mean(Iterators.product(T1_true, T2_true)) do (T1, T2)
        # Ignore pairs for which `T2 > T1`.
        T2 > T1 && return 0
        # Simulate noisy signal using the true T1 and T2 values.
        signal .= bssfp.(TR, flip, T1, T2) .+ noise_level .* randn.()
        # Estimate T1 and T2 from the noisy signal.
        (T1_est, T2_est) = gridsearch(signal, TR, flip)
        # Compare estimates to truth.
        T1_err = (T1_est - T1)^2
        T2_err = (T2_est - T2)^2
        # Combine the T1 and T2 errors into one single error metric
        # by averaging the respective errors
        # after normalizing by the mean true value of each parameter.
        err = (T1_err / T1_avg + T2_err / T2_avg) / 2
    end
end

Optimization

Now we are finally ready to optimize! We will use Optimization.jl. Many optimizers are available, but because we have a non-convex optimization problem, we will use the adaptive particle swarm global optimization algorithm. Here’s a function that does the optimization:

# Load needed packages.
# Optimization.jl provides the optimization infrastructure,
# while OptimizationOptimJL.jl wraps the Optim.jl package
# that provides the optimization algorithm we will use.
using Optimization, OptimizationOptimJL

# Load the Random standard library package to enable setting the random seed.
using Random

# `TR_init` and `flip_init` are vectors of the same length
# that provide a starting point for the optimization algorithm.
# The length of these vectors also determines the number of bSSFP scans to simulate.
function scan_optimization(TR_init, flip_init)
    # Ensure randomly generated noise is the same for each evaluation.
    Random.seed!(0)
    N_scans = length(TR_init)
    length(flip_init) == N_scans || error("`TR_init` and `flip_init` have different lengths")
    # The following converts the cost function we created (`estimator_error`)
    # into a form that Optimization.jl can use.
    # Specifically, Optimization.jl needs a cost function that takes two inputs
    # (the first of which contains the parameters to optimize)
    # and returns a real number.
    # The input `x` is a concatenation of `TR` and `flip`,
    # i.e., `TR == x[1:N_scans]` and `flip == x[N_scans+1:end]`.
    # The input `p` is unused, but is needed by Optimization.jl.
    cost_fun = (x, p) -> estimator_error(x[1:N_scans], x[N_scans+1:end])
    # Specify constraints.
    # The lower and upper bounds are chosen to ensure reasonable bSSFP scans.
    (min_TR, max_TR) = (0.001, 0.02)
    (min_flip, max_flip) = (1, 90)
    constraints = (;
        lb = [fill(min_TR, N_scans); fill(min_flip, N_scans)],
        ub = [fill(max_TR, N_scans); fill(max_flip, N_scans)],
    )
    # Set up and solve the problem.
    f = OptimizationFunction(cost_fun)
    prob = OptimizationProblem(f, [TR_init; flip_init]; constraints...)
    sol = solve(prob, ParticleSwarm(lower = prob.lb, upper = prob.ub, n_particles = 3))
    # Extract the optimized TRs and flip angles, remembering that `sol.u == [TR; flip]`.
    TR_opt = sol.u[1:N_scans]
    flip_opt = sol.u[N_scans+1:end]
    return (TR_opt, flip_opt)
end

Results

Let’s see how scan parameter optimization can improve tissue parameter estimates. We will use the following function to take scan parameters, simulate bSSFP scans, and then estimate the tissue parameters from those scans. We will compare the results of this function with manually chosen scan parameters to those with optimized scan parameters.

# Load the LinearAlgebra standard library package for access to `norm`.
using LinearAlgebra

# Load Plots.jl for displaying the results.
using Plots

# Helper function for plotting.
function plot_true_vs_est(T_true, T_est, title, rmse, clim)
    return heatmap([T_true T_est];
        title,
        clim,
        xlabel = "RMSE = $(round(rmse; digits = 4)) s",
        xticks = [],
        yticks = [],
        showaxis = false,
        aspect_ratio = 1,
    )
end

function run(TR, flip)
    # Create a synthetic object to scan.
    # The background of the object will be indicated with values of `0.0`
    # for the tissue parameters.
    nx = ny = 128
    object = map(Iterators.product(range(-1, 1, nx), range(-1, 1, ny))) do (x, y)
        r = hypot(x, y)
        if r < 0.5
            return (0.8, 0.08)
        elseif r < 0.8
            return (1.3, 0.09)
        else
            return (0.0, 0.0)
        end
    end
    # Simulate the bSSFP scans.
    T1_true = first.(object)
    T2_true = last.(object)
    noise_level = 0.001
    signal = map(T1_true, T2_true) do T1, T2
        # Ignore the background of the object.
        (T1, T2) == (0.0, 0.0) && return 0.0
        bssfp.(TR, flip, T1, T2) .+ noise_level .* randn.()
    end
    # Estimate the tissue parameters.
    T1_est = zeros(nx, ny)
    T2_est = zeros(nx, ny)
    for i in eachindex(signal, T1_est, T2_est)
        # Don't try to estimate tissue parameters for the background.
        signal[i] == 0.0 && continue
        (T1_est[i], T2_est[i]) = gridsearch(signal[i], TR, flip)
    end
    # Compute the root mean squared error.
    m = T1_true .!= 0.0
    T1_rmse = sqrt(norm(T1_true[m] - T1_est[m]) / count(m))
    T2_rmse = sqrt(norm(T2_true[m] - T2_est[m]) / count(m))
    # Plot the results.
    p_T1 = plot_true_vs_est(T1_true, T1_est, "True vs Estimated T1", T1_rmse, (0, 2.5))
    p_T2 = plot_true_vs_est(T2_true, T2_est, "True vs Estimated T2", T2_rmse, (0, 0.25))
    return (p_T1, p_T2)
end

First, let’s see the results with no optimization:

TR_init = [0.005, 0.005, 0.005]
flip_init = [30, 60, 80]

(p_T1_init, p_T2_init) = run(TR_init, flip_init)
display(p_T1_init)

T1 estimates using initial scan parameters

display(p_T2_init)

T2 estimates using initial scan parameters

Now let’s optimize the scan parameters and then see how the tissue parameter estimates look:

(TR_opt, flip_opt) = scan_optimization(TR_init, flip_init)

(p_T1_opt, p_T2_opt) = run(TR_opt, flip_opt)
display(p_T1_opt)

T1 estimates using optimized scan parameters

display(p_T2_opt)

T2 estimates using optimized scan parameters

We can see that the optimized scans result in much better tissue parameter estimates!

Summary

In this post, we saw how Julia can be used to optimize MRI scan parameters to improve tissue parameter estimates. Even though we discussed MRI specifically, the concepts presented here easily extend to other applications where signal models are known and optimization is required.

Note that most of the code for this post was taken from a Dash app I helped create for JuliaHub. Feel free to check it out if you want to see this code in action!
