Category Archives: Julia

WAW2024 conference: June 3-6, 2024

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2024/05/31/waw2024.html

Introduction

Next week I am organizing the WAW2024 conference. The event covers various aspects of theoretical and applied modeling of networks.

As an introduction I want to run a simulation of an example problem. Consider an Erdős–Rényi random graph on n nodes, where an edge between any two nodes is present with probability p. Next, assume that we repeatedly pick an edge uniformly at random from the graph and remove both of its endpoint nodes, marking them as matched. The question is: what is the expected fraction of nodes that get matched by this process?

Today, I will investigate this problem using simulation.

The post was written under Julia 1.10.1, Graphs.jl 1.11.0, and DataFrames.jl 1.6.1.

The simulation

Here is a simulator of our greedy matching process. In the simulation we traverse all edges of the graph in a random order,
and in the matched vector we keep track of which nodes have already been matched.

using Graphs
using Random
using Statistics

function run_sim(n::Integer, p::Real)
    g = erdos_renyi(n, p)                 # random graph on n nodes with edge probability p
    matched = fill(false, n)              # matched[i] tells if node i has already been matched
    for e in shuffle!(collect(edges(g)))  # traverse the edges in a random order
        n1, n2 = e.src, e.dst
        if !(matched[n1] || matched[n2])  # match the pair only if both endpoints are still free
            matched[n1] = true
            matched[n2] = true
        end
    end
    return mean(matched)                  # fraction of matched nodes
end
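
As a quick sanity check (a hypothetical call, not part of the experiment below; the exact value depends on the random number generator state), a single run for a graph with mean degree n*p equal to 1 should return a matched fraction of roughly 0.5:

run_sim(1_000, 0.001)  # returns the matched fraction; roughly 0.5 when the mean degree is 1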

The experiment

Let us now test our simulator on a graph with 10,000 nodes and p varying from 0.00001 to 0.1 (on a logarithmic scale).

julia> using DataFrames

julia> df = DataFrame(p=Float64[], rep=Int[], res=Float64[])
0×3 DataFrame
 Row │ p        rep    res
     │ Float64  Int64  Float64
─────┴─────────────────────────

julia> ps = [10.0^i for i in -5:-1]
5-element Vector{Float64}:
 1.0e-5
 0.0001
 0.001
 0.010000000000000002
 0.1

julia> Random.seed!(1234);

julia> @time for p in ps, rep in 1:16
           push!(df, (p, rep, run_sim(10_000, p)))
       end
 79.190585 seconds (438.02 M allocations: 14.196 GiB, 7.13% gc time, 0.36% compilation time)

julia> df
80×3 DataFrame
 Row │ p        rep    res
     │ Float64  Int64  Float64
─────┼─────────────────────────
   1 │  1.0e-5      1   0.0948
   2 │  1.0e-5      2   0.094
   3 │  1.0e-5      3   0.097
   4 │  1.0e-5      4   0.0892
   5 │  1.0e-5      5   0.0848
   6 │  1.0e-5      6   0.093
  ⋮  │    ⋮       ⋮       ⋮
  76 │  0.1        12   0.9992
  77 │  0.1        13   0.999
  78 │  0.1        14   0.999
  79 │  0.1        15   0.999
  80 │  0.1        16   0.9988
                69 rows omitted

The simulation took a bit over one minute, mainly due to the p=0.1 case, which generates a lot of edges in the graph.
Let us aggregate the obtained data to get the mean, standard error, and range of the results for each value of p:

julia> combine(groupby(df, "p"),
               "p" => (x -> 10_000 * first(x)) => "mean_degree",
               "res" => mean,
               "res" => (x -> std(x) / sqrt(length(x))) => "res_se",
               "res" => extrema)
5×5 DataFrame
 Row │ p        mean_degree  res_mean  res_se       res_extrema
     │ Float64  Float64      Float64   Float64      Tuple…
─────┼───────────────────────────────────────────────────────────────
   1 │  1.0e-5          0.1  0.090975  0.000842986  (0.0848, 0.097)
   2 │  0.0001          1.0  0.499888  0.00190971   (0.4848, 0.5134)
   3 │  0.001          10.0  0.909425  0.000523729  (0.9062, 0.9126)
   4 │  0.01          100.0  0.990162  0.000257694  (0.9888, 0.992)
   5 │  0.1          1000.0  0.999     5.47723e-5   (0.9986, 0.9994)

We can see that the sharp increase in the fraction of matched nodes happens around a mean degree of 1 in the graph.
Additionally, we see that even for high p we do not match every node in the greedy matching process.
Finally, the obtained results are relatively well concentrated around their means.

Conclusions

If you want to see how this problem can be solved analytically, I recommend reading this paper.
Using the formulas derived there, the asymptotic fraction of matched nodes is c/(c+1), where c = n*p is the mean degree, so we can compare our simulation results with the asymptotic theory:

julia> (10_000 .* ps) ./ (10_000 .* ps .+ 1)
5-element Vector{Float64}:
 0.09090909090909091
 0.5
 0.9090909090909091
 0.9900990099009901
 0.999000999000999

Indeed, we see that the agreement is quite good.

If such problems interest you, I invite you to join us at the WAW2024 conference.

CUDA.jl 5.4: Memory management mayhem

By: Tim Besard

Re-posted from: https://juliagpu.org/post/2024-05-28-cuda_5.4/index.html

CUDA.jl 5.4 comes with many memory-management related changes that should improve performance of memory-heavy applications, and make it easier to work with heterogeneous set-ups involving multiple GPUs or using both the CPU and GPU.

Before anything else, let's get the breaking changes out of the way. CUDA.jl v5.4 only bumps the minor version, so it should be compatible with existing codebases. However, there are a couple of API changes that, although covered by appropriate deprecation warnings, applications should be updated for:

  • The CUDA.Mem submodule has been removed. All identifiers have been moved to the parent CUDA module, with a couple being renamed in the process:

    • Mem.Device and Mem.DeviceBuffer have been renamed to CUDA.DeviceMemory (the same applies to Mem.Host and Mem.Unified);

    • enums from the Mem submodule have gained a MEM prefix, e.g., Mem.ATTACH_GLOBAL has been renamed to CUDA.MEM_ATTACH_GLOBAL;

    • Mem.set! has been renamed to CUDA.memset;

    • Mem.info() has been renamed to CUDA.memory_info();

  • CUDA.memory_status() has been renamed to CUDA.pool_status();

  • CUDA.available_memory() has been renamed to CUDA.free_memory().
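
For illustration, here is a minimal sketch of what calls to the renamed memory-query functions look like after updating (only the renames listed above; per the deprecation policy, the old names still work for now but emit warnings):

using CUDA

CUDA.free_memory()   # previously CUDA.available_memory()
CUDA.memory_info()   # previously CUDA.Mem.info()
CUDA.pool_status()   # previously CUDA.memory_status()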

The meat of this release is in the memory management improvements detailed below. These changes can have a significant impact on the performance of your application, so it's recommended to thoroughly test your application after upgrading!

Eager garbage collection

Julia is a garbage collected language, which means that (GPU) allocations can fail because garbage has piled up, necessitating a collection cycle. Previous versions of CUDA.jl handled this at the allocation site, detecting out-of-memory errors and triggering the GC. This was not ideal, as it could lead to significant pauses and a bloated memory usage.

To improve this, CUDA.jl v5.4 more accurately keeps track of memory usage, and uses that information to trigger the GC early at appropriate times, e.g., when waiting for a kernel to finish. This should lead to more predictable performance, both by distributing the cost of garbage collection over time and by potentially masking it behind other operations.

For example, the following toy model implemented with Flux.jl allocates a ton of memory:

using CUDA, Flux
using MLUtils: DataLoader

n_obs = 300_000
n_feature = 1000
X = rand(n_feature, n_obs)
y = rand(1, n_obs)
train_data = DataLoader((X, y) |> gpu; batchsize = 2048, shuffle=false)

model = Dense(n_feature, 1) |> gpu
loss(m, _x, _y) = Flux.Losses.mse(m(_x), _y)
opt_state = Flux.setup(Flux.Adam(), model)

Flux.train!(loss, model, train_data, opt_state)
for epoch in 1:100
  Flux.train!(loss, model, train_data, opt_state)
end

Without eager garbage collection, this leads to expensive pauses while freeing a large amount of memory at every epoch. We can simulate this by artificially limiting the memory available to the GPU, while also disabling the new eager garbage collection feature by setting the JULIA_CUDA_GC_EARLY environment variable to false (this is a temporary knob that will be removed in the future, but may be useful now for evaluating the new feature):

❯ JULIA_CUDA_GC_EARLY=false JULIA_CUDA_HARD_MEMORY_LIMIT=4GiB \
  julia --project train.jl
...
[ Info: Epoch 90 train time 0.031s
retry_reclaim: freed 2.865 GiB
[ Info: Epoch 91 train time 0.031s
[ Info: Epoch 92 train time 0.027s
retry_reclaim: freed 2.865 GiB
[ Info: Epoch 93 train time 0.03s
retry_reclaim: freed 2.873 GiB
[ Info: Epoch 94 train time 0.031s
retry_reclaim: freed 2.873 GiB
[ Info: Epoch 95 train time 0.03s
retry_reclaim: freed 2.873 GiB
[ Info: Epoch 96 train time 0.031s
[ Info: Epoch 97 train time 0.027s
retry_reclaim: freed 2.873 GiB
[ Info: Epoch 98 train time 0.031s
retry_reclaim: freed 2.865 GiB
[ Info: Epoch 99 train time 0.031s
retry_reclaim: freed 2.865 GiB
[ Info: Epoch 100 train time 0.031s
[ Info: Total time 4.307s

With eager garbage collection enabled, more frequent but less costly pauses result in significantly improved performance:

❯ JULIA_CUDA_GC_EARLY=true JULIA_CUDA_HARD_MEMORY_LIMIT=4GiB \
  julia --project wip.jl
...
[ Info: Epoch 90 train time 0.031s
maybe_collect: collected 1.8 GiB
maybe_collect: collected 1.8 GiB
[ Info: Epoch 91 train time 0.033s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 92 train time 0.031s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 93 train time 0.031s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 94 train time 0.03s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 95 train time 0.03s
maybe_collect: collected 1.8 GiB
maybe_collect: collected 1.8 GiB
[ Info: Epoch 96 train time 0.033s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 97 train time 0.03s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 98 train time 0.03s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 99 train time 0.03s
maybe_collect: collected 1.8 GiB
[ Info: Epoch 100 train time 0.03s
[ Info: Total time 3.76s

Eager garbage collection is driven by a heuristic that considers the current memory pressure, how much memory was freed during previous collections, and how much time that took. It is possible that the current implementation is not optimal, so if you encounter performance issues, please file an issue.

Tracked memory allocations

When working with multiple GPUs, it is important to differentiate between the device that memory was allocated on, and the device used to execute code. Practically, this meant that users of CUDA.jl had to manually remember that allocating and using CuArray objects (typically) needed to happen with the same device active. The same is true for streams, which are used to order operations executing on a single GPU.

To improve this, CUDA.jl now keeps track of the device that owns the memory, and the stream last used to access it, enabling the package to "do the right thing" when using that memory in kernels or with library functionality. This does not mean that CUDA.jl will automatically switch the active device: we want to keep the user in control of that, as it often makes sense to access memory from another device if your system supports it.

Let's break down the implications of this change.

1. Using multiple GPUs

If you have multiple GPUs, direct P2P (peer-to-peer) access between devices may be possible (e.g., using NVLink, or just over PCIe). In this case, CUDA.jl will now automatically configure the system to allow such access, making it possible to seamlessly use memory allocated on one device in kernels executing on a different device:

julia> # Allocate memory on device 0
       device!(0)
CuDevice(0): Tesla V100-PCIE-16GB

julia> a = CuArray([1]);

julia> # Use on device 1
       device!(1)
CuDevice(1): Tesla V100S-PCIE-32GB

julia> a .+ 1;

If P2P access between devices is not possible, CUDA.jl will now raise an error instead of throwing an illegal memory access error as it did before:

julia> # Use on incompatible device 2
       device!(2)
CuDevice(2): NVIDIA GeForce GTX 1080 Ti
julia> a .+ 1
ERROR: cannot take the GPU address of inaccessible device memory.
You are trying to use memory from GPU 0 on GPU 2.
P2P access between these devices is not possible;
either switch to GPU 0 by calling `CUDA.device!(0)`,
or copy the data to an array allocated on device 2.

As the error message suggests, you can always copy memory between devices using the copyto! function. In this case, CUDA.jl will fall back to staging the copy on the host when P2P access is not possible.
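
For example, a minimal sketch (continuing the session shown above) of copying the data over to the active device instead of accessing it directly:

julia> device!(2);

julia> b = CuArray{Int}(undef, size(a));  # freshly allocated on device 2

julia> copyto!(b, a);  # staged through the host because P2P access is unavailable

julia> b .+ 1;  # now uses memory local to device 2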

2. Using multiple streams

Streams are used to order operations executing on a single GPU. In CUDA.jl, every Julia task has its own stream, making it easy to group independent operations together and making it possible for the GPU to overlap execution of these operations.

Before CUDA.jl v5.4, users had to be careful about synchronizing data used in multiple tasks. It was recommended, for example, to end every data-producing task with an explicit call to synchronize(), or alternatively make sure to device_synchronize() at the start of a data-consuming task. Now that CUDA.jl keeps track of the stream used to last access memory, it can automatically synchronize streams when needed:

# Allocate some data
a = CUDA.zeros(4096, 4096)
b = CUDA.zeros(4096, 4096)
#synchronize()  # No longer needed

# Perform work on a task
t = @async begin
  a * b
  #synchronize()  # No longer needed
end

# Fetch the results
c = fetch(t)

3. Using capturing APIs

All of the above is implemented by piggybacking on the function that converts memory objects to pointers, under the assumption that this will be the final operation before the memory is used. This is generally true, with one important exception: APIs that capture memory. For example, when recording an operation using the CUDA graph APIs, a memory address may be captured and used later without CUDA.jl being aware of it.

CUDA.jl accounts for this by detecting conversions during stream capture; however, some APIs may not be covered yet. If you encounter issues with capturing APIs, let us know, and keep using additional synchronization calls to ensure correctness.

Unified memory iteration

Unified memory is a feature of CUDA that allows memory to be accessed from both the CPU and the GPU. We have now greatly improved the performance of CPU code that iterates over the elements of a unified CuArray. Although such iteration is typically unwanted (with device memory it triggers the dreaded "scalar indexing" error), it can be useful when incrementally porting code to the GPU.

Concretely, accessing elements of a unified CuArray on the CPU is much faster now:

julia> # Reference
       a = [1];

julia> @btime $a[];
  1.959 ns (0 allocations: 0 bytes)

julia> b = cu(a; unified=true);

julia> # Before
       @btime $b[];
  2.617 μs (0 allocations: 0 bytes)

julia> # After
       @btime $b[];
  4.140 ns (0 allocations: 0 bytes)

Notice the different unit! This has a massive impact on real-life performance, as demonstrated, for example, by calling foldl, which does not have a GPU-optimized implementation:

julia> a = cu(rand(1024, 1024); unified=true);

julia> # Before
       @b foldl(+, a)
4.210 s (9 allocs: 208 bytes, without a warmup)

julia> # After
       @b foldl(+, a)
3.107 ms (9 allocs: 208 bytes)

For completeness, doing this with regular device memory triggers a scalar indexing error:

julia> a = cu(rand(1024, 1024));

julia> foldl(+, a)
ERROR: Scalar indexing is disallowed.

These changes should make it easier to port applications to the GPU by incrementally moving parts of the codebase without having to worry about the performance of accessing that memory from the CPU. The only requirement is to use unified memory, e.g., by calling cu with unified=true, or by setting the CUDA.jl preference default_memory to use unified memory by default. However, as unified memory comes with a slight cost and results in synchronous allocation behavior, it is still recommended to switch back to regular device memory once your application has been fully ported to the GPU.

Other changes

To keep this post from becoming even longer, a quick rundown of other changes:

  • @wsmoses introduced initial support for automatic differentiation of heterogeneous host/device code using Enzyme.jl. Before, you would have to differentiate through host and device code separately, and manually set up rules for crossing the host/device boundary. Now, you can differentiate through entire applications with ease;

  • CUDA.@profile now automatically detects external profilers, so it should not be required to specify external=true anymore when running under NSight;

  • Exception output has been improved, only reporting a single error message instead of generating output on each thread, and better forwarding the exception type;

  • Cached handles from libraries will now be freed when under memory pressure;

  • Tegra devices are now supported by our artifacts, obviating the need for a local toolkit;

  • Support for CUDA 12.5 has been added, as well as initial support for Julia 1.12.

A simple coin-tossing game

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2024/05/24/probability.html

Introduction

I have been writing my blog for over 4 years now (without missing a single week).
My first post was published on May 10, 2020; you can find it here.

There is a small change in how I distribute my content. As of last week the repository of my blog is public,
so if you find any mistake please do not hesitate to open a Pull Request here.

To celebrate this I decided to go back to my favorite topic – mathematical puzzles.
Today I use a classic coin-tossing game as an example.

The post was written under Julia 1.10.1, StatsBase.jl 0.34.4, FreqTables.jl 0.4.6, and BenchmarkTools.jl 1.5.0.

The problem

Assume Alice and Bob toss a fair coin. Alice wins if a tail (T) is tossed immediately after a head (H), that is, when we see an HT sequence.
Bob wins if two consecutive heads are tossed, that is, when we see an HH sequence.

The questions are:

  • Who is more likely to win this game?
  • If only Alice played, how long, on average, would she wait until HT was tossed?
  • If only Bob played, how long, on average, would he wait until HH was tossed?

Let us try to answer these questions using Julia.

Both players play

This code simulates the situation when Alice and Bob play together:

function both()
    a = rand(('H', 'T'))       # previous toss
    while true
        b = rand(('H', 'T'))   # current toss
        if a == 'T'
            a = b              # no head seen yet; keep tossing
        else                   # a == 'H', so the game ends on this toss
            return b == 'H' ? "Bob" : "Alice"
        end
    end
end

The both function returns "Bob" if Bob wins, and "Alice" otherwise.
From the code it should already be clear that both players have the same probability of winning.
The only way to terminate the simulation is the return b == 'H' ? "Bob" : "Alice" statement, and this condition is symmetric
with respect to Alice and Bob. Let us confirm this by running a simulation:

julia> using FreqTables, Random

julia> Random.seed!(1234);

julia> freqtable([both() for _ in 1:100_000_000])
2-element Named Vector{Int64}
Dim1  │
──────┼─────────
A     │ 50000012
B     │ 49999988

Indeed, the numbers of times Alice and Bob win are essentially the same.

Alice’s waiting time

Now let us check how long, on average, Alice has to wait to see the HT sequence. Here is Alice’s simulator:

function alice()
    a = rand(('H', 'T'))
    i = 1
    while true
        b = rand(('H', 'T'))
        i += 1
        a == 'H' && b == 'T' && return i
        a = b
    end
end

Let us check it:

julia> using StatsBase

julia> describe([alice() for _ in 1:100_000_000])
Summary Stats:
Length:         100000000
Missing Count:  0
Mean:           3.999890
Std. Deviation: 2.000032
Minimum:        2.000000
1st Quartile:   2.000000
Median:         3.000000
3rd Quartile:   5.000000
Maximum:        31.000000
Type:           Int64

So it seems that, in expectation, Alice finishes her game in 4 tosses.
Can we expect the same for Bob (as we remember – when they play together they have the same chances of winning)?
Let us see.

Bob’s waiting time

Now let us check how long, on average, Bob has to wait to see the HH sequence. Here is Bob’s simulator:

function bob()
    a = rand(('H', 'T'))
    i = 1
    while true
        b = rand(('H', 'T'))
        i += 1
        a == 'H' && b == 'H' && return i
        a = b
    end
end

Let us check it:

julia> describe([bob() for _ in 1:100_000_000])
Summary Stats:
Length:         100000000
Missing Count:  0
Mean:           5.999915
Std. Deviation: 4.690177
Minimum:        2.000000
1st Quartile:   2.000000
Median:         5.000000
3rd Quartile:   8.000000
Maximum:        87.000000
Type:           Int64

To our surprise, Bob needs 6 coin tosses, on average, to see HH.

What is the reason for this difference? Assume we have just tossed H. Start with Bob: if the next toss is H he finishes, but if it is T he has to wait until H shows up again before he can even consider finishing.
Alice, on the other hand, finishes if the next toss is T, and if it is H she loses nothing, as she is already in a state that gives her a chance to finish the game in the next step.

Conclusions

The difference between the joint game and the separate games is a bit surprising, and I hope you found it interesting if you have not seen this puzzle before.
Today I have approached this problem using simulation. However, it is easy to write down a Markov chain representation of all three scenarios and solve them analytically.
I encourage you to try this exercise; a minimal sketch of the waiting-time part is given below.
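
Here is a minimal sketch (my own, not from the original post) of the Markov chain computation of the expected waiting times. With states S (no useful history) and H (the last toss was a head), the expectations satisfy E_S = 1 + 0.5*E_H + 0.5*E_S in both games, E_H = 1 + 0.5*E_H for Alice (waiting for HT), and E_H = 1 + 0.5*E_S for Bob (waiting for HH). Solving these small linear systems reproduces the simulated means of 4 and 6:

# Alice (HT): 0.5*E_S - 0.5*E_H = 1 and 0.5*E_H = 1
E_alice = [0.5 -0.5; 0.0 0.5] \ [1.0, 1.0]   # gives [4.0, 2.0]

# Bob (HH): 0.5*E_S - 0.5*E_H = 1 and -0.5*E_S + E_H = 1
E_bob = [0.5 -0.5; -0.5 1.0] \ [1.0, 1.0]    # gives [6.0, 4.0]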

PS.

In the code I use the rand(('H', 'T')) form to generate randomness. It is much faster than e.g. writing rand(["H", "T"]) (which might be one's first instinct), for two reasons:

  • using Char instead of String is a more lightweight option;
  • using Tuple instead of Vector avoids allocations.

Let us see a comparison of timing (I cut out the histograms from the output):

julia> using BenchmarkTools

julia> @benchmark rand(('H', 'T'))
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  1.900 ns … 233.700 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     2.500 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   3.316 ns ±   2.772 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark rand(["H", "T"])
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  15.816 ns …  2.086 μs  ┊ GC (min … max): 0.00% … 96.30%
 Time  (median):     19.019 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   23.041 ns ± 60.335 ns  ┊ GC (mean ± σ):  9.44% ±  3.63%

 Memory estimate: 64 bytes, allocs estimate: 1.

In this case we could also just use rand(Bool) (as the coin is fair and has only two states):

julia> @benchmark rand(Bool)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  1.700 ns … 131.000 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     3.200 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   3.218 ns ±   2.112 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

 Memory estimate: 0 bytes, allocs estimate: 0.
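
If we went that route, each simulator would just map the boolean back to a coin face, e.g. (a hypothetical variant, not used in the post):

a = rand(Bool) ? 'H' : 'T'  # let true play the role of heads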

but as you can see rand(('H', 'T')) has a similar speed and leads to much more readable code.