
Coin-tossing game: a numerical approach

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2024/06/14/probability3.html

Introduction

Today I decided to follow up on my last post solving a coin-tossing game.
This time, instead of simulation, I want to use a numerical approach
(which is probably a bit harder).

The post was written under Julia 1.10.1 and Graphs.jl 1.11.0.

The problem

Let me describe the setting of a game first (it is an extension of this post).

Assume Alice and Bob toss a fair coin. In each toss head (h) or tail (t) can show up with equal probability.

Alice and Bob choose some sequence of h and t they are waiting for. We assume that the chosen sequences have the same length and are different.
For example Alice could choose htht and Bob tthh.

The winner of the game is the person who saw their sequence first.

The question we ask is whether, for a fixed sequence length n, we can get cycles, that is, for example, that sequence s1 beats s2, s2 beats s3, and s3 beats s1.

To answer this question we will represent the game as a Markov process.

Step 1: non-terminating Markov chain

First we create a transition matrix of a Markov chain tracking the current n-element sequence in the game we consider.
Here is the code:

function markov(size::Integer)
    idx2states = vec(join.(Iterators.product([['h', 't'] for _ in 1:size]...)))
    states2idx = Dict(idx2states .=> eachindex(idx2states))
    P = zeros(2^size, 2^size)
    for state in idx2states
        for next in ("h", "t")
            nextstate = chop(state, head=1, tail=0) * next
            P[states2idx[state], states2idx[nextstate]] = 0.5
        end
    end
    return P, idx2states, states2idx
end

What we do in it is as follows:

  1. The idx2states vector keeps track of all h and t sequences that have length n (i.e. it is a mapping from state number to state signature).
  2. states2idx is the inverse mapping: from state signature to state number.
  3. P is the transition matrix of our chain. Note that from the sequence ab... (where all elements are h or t) we go to sequence b...h or b...t with equal probability.
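As a quick sanity check of this construction (assuming the markov function above is in scope), we can verify that P is a proper stochastic matrix, that each state has exactly two equally likely successors, and that the two mappings are mutually inverse:

```julia
# Sanity check of the markov construction (assumes `markov` from above is defined)
P, idx2states, states2idx = markov(3)

# every row of a transition matrix must sum to 1
@assert all(isapprox.(sum(P, dims=2), 1.0))

# each state has exactly two successors, each reached with probability 0.5
@assert all(count(==(0.5), row) == 2 for row in eachrow(P))

# the two mappings are mutually inverse
@assert all(states2idx[idx2states[i]] == i for i in eachindex(idx2states))
```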

Step 2: terminating Markov chain

We now need to create a function that is aware of Alice’s and Bob’s chosen sequences and makes the corresponding states terminating (absorbing). We want to compute the probabilities of ending up in Alice’s or Bob’s state.
Here is the code:

function game(P, states2idx, alice, bob)
    P_game = copy(P)
    alice_idx, bob_idx = states2idx[alice], states2idx[bob]
    P_game[alice_idx, :] .= 0.0
    P_game[alice_idx, alice_idx] = 1.0
    P_game[bob_idx, :] .= 0.0
    P_game[bob_idx, bob_idx] = 1.0
    n = length(states2idx)
    terminal = fill(1 / n, 1, n) * P_game^(2^30)
    return terminal[states2idx[alice]], terminal[states2idx[bob]]
end

Note that we first update the P_game matrix to make the alice_idx and bob_idx states terminating. Then, since I was lazy, we assume we make 2^30 steps of the process (fortunately, in Julia this is fast).
Observe that initially all states are equally probable, so the terminal matrix keeps information about the long-term probabilities of staying in all possible states.
We extract the probabilities of Alice’s and Bob’s states and return them.
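For sequences of length 2 there is a known closed-form answer we can test against: th beats hh three times out of four, since hh wins only if the first two tosses are both heads. A quick check (assuming markov and game from above are defined):

```julia
# Known result: th beats hh with probability 3/4
# (assumes `markov` and `game` from above are defined)
P, idx2states, states2idx = markov(2)
alice_win, bob_win = game(P, states2idx, "th", "hh")
@assert isapprox(alice_win, 0.75; atol=1e-6)
@assert isapprox(bob_win, 0.25; atol=1e-6)
```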

Step 3: looking for cycles

We are now ready for a final move. We can consider all possible preferred sequences of Alice and Bob and create a graph that keeps track of which sequences beat other sequences:

using Graphs

function analyze_game(size::Integer, details::Bool=true)
    P, idx2states, states2idx = markov(size)
    g = SimpleDiGraph(length(states2idx))
    details && println("\nWinners:")
    for alice in idx2states, bob in idx2states
        alice > bob || continue
        alice_win, bob_win = game(P, states2idx, alice, bob)
        if alice_win > 0.51
            winner = "alice"
            add_edge!(g, states2idx[alice], states2idx[bob])
        elseif bob_win > 0.51
            winner = "bob"
            add_edge!(g, states2idx[bob], states2idx[alice])
        else
            winner = "tie (or close :))"
        end
        details && println(alice, " vs ", bob, ": ", winner)
    end
    cycles = simplecycles(g)
    if !isempty(cycles)
        min_len = minimum(length, cycles)
        filter!(x -> length(x) == min_len, cycles)
    end
    println("\nCycles:")
    for cycle in cycles
        println(idx2states[cycle])
    end
end

Note that I used a 0.51 threshold for detecting dominance of one state over the other. We could do better, but in practice for small n it is enough and working this way is numerically simpler.
What this threshold means is that we want to be “sure” that one player beats the other.
In our code we do two things:

  • optionally print the information which state beats which state;
  • print information about cycles found in beating patterns (we keep only cycles of the shortest length).
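The cycle detection relies on Graphs.jl’s simplecycles function, which enumerates all simple cycles of a directed graph as vectors of vertices. A minimal illustration on a toy triangle graph (not part of the game itself):

```julia
using Graphs

# a directed triangle: 1 → 2 → 3 → 1
g = SimpleDiGraph(3)
add_edge!(g, 1, 2)
add_edge!(g, 2, 3)
add_edge!(g, 3, 1)

# simplecycles returns each simple cycle as a vector of vertices
cycles = simplecycles(g)
@assert length(cycles) == 1 && sort(cycles[1]) == [1, 2, 3]
```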

Let us check the code. Start with sequences of length 2:

julia> analyze_game(2)

Winners:
th vs hh: alice
th vs ht: tie (or close :))
ht vs hh: tie (or close :))
tt vs hh: tie (or close :))
tt vs th: tie (or close :))
tt vs ht: bob

Cycles:

We see that only th beats hh and ht beats tt (this is a symmetric case). We did not find any cycles.

Let us check 3:

julia> analyze_game(3)

Winners:
thh vs hhh: alice
thh vs hth: tie (or close :))
thh vs hht: alice
thh vs htt: tie (or close :))
hth vs hhh: alice
hth vs hht: bob
tth vs hhh: alice
tth vs thh: alice
tth vs hth: alice
tth vs hht: tie (or close :))
tth vs tht: alice
tth vs htt: bob
hht vs hhh: tie (or close :))
tht vs hhh: alice
tht vs thh: tie (or close :))
tht vs hth: tie (or close :))
tht vs hht: bob
tht vs htt: tie (or close :))
htt vs hhh: alice
htt vs hth: tie (or close :))
htt vs hht: bob
ttt vs hhh: tie (or close :))
ttt vs thh: bob
ttt vs hth: bob
ttt vs tth: tie (or close :))
ttt vs hht: bob
ttt vs tht: bob
ttt vs htt: bob

Cycles:
["thh", "hht", "htt", "tth"]

We now have a cycle. The shortest cycle has length 4 and it is unique. Let us see what happens for patterns of length 4 (I suppress printing the details as there are too many):

julia> analyze_game(4, false)

Cycles:
["thhh", "hhth", "hthh"]
["thhh", "hhtt", "ttth"]
["hhth", "hthh", "thht"]
["hhth", "thtt", "tthh"]
["hthh", "hhtt", "thth"]
["hthh", "hhtt", "ttht"]
["hthh", "hhtt", "ttth"]
["thht", "hhtt", "ttth"]
["htht", "thtt", "tthh"]
["thtt", "tthh", "hhht"]
["thtt", "htth", "ttht"]
["thtt", "httt", "ttht"]
["tthh", "hhht", "htth"]
["tthh", "hhht", "httt"]

In this case we have many cycles, and they are even shorter: they have length three.

Conclusions

The conclusion is that the game is slightly surprising. We can have cycles of dominance between sequences. I hope you liked this example. Happy summer!

A not so simple coin-tossing game

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2024/06/07/probability2.html

Introduction

Two weeks ago I wrote a post about a simple coin tossing game.
Today let me follow up on it with a bit more difficult question and a slightly changed implementation strategy.

The post was written under Julia 1.10.1, DataFrames.jl 1.6.1, and StatsBase.jl 0.34.3.

The problem

Let me describe the setting of a game first (it is similar to what I described in this post).

Assume Alice and Bob toss a fair coin n times. In each toss head (h) or tail (t) can show up with equal probability.

Alice counts the number of times a ht sequence showed.
Bob counts the number of times a hh sequence showed.

The winner of the game is the person who saw a bigger number of occurrences of their favorite sequence.
For example, for n=3: if we get hhh then Bob wins (seeing 2 occurrences of hh, while Alice saw 0 occurrences of ht). If we get hht there is a tie (both patterns occurred once). If we get tht Alice wins.

The questions are:

  • Who, on average, sees more occurrences of their favorite pattern?
  • Who is more likely to win this game?

Let us try to answer these questions using Julia as usual.
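Before simulating, note that for small n we can answer the second question exactly by enumerating all 2^n sequences. A short sketch for n=3 (these win counts are exact, not simulated):

```julia
# Exhaustive check for n = 3: enumerate all 8 toss sequences
results = map(Iterators.product(fill(('h', 't'), 3)...)) do toss
    seq = join(toss)
    # 1 = hh (Bob) wins, 0 = tie, -1 = ht (Alice) wins
    cmp(count("hh", seq, overlap=true), count("ht", seq, overlap=true))
end

@assert count(==(-1), results) == 3  # Alice (ht) wins in 3 cases
@assert count(==(1), results) == 2   # Bob (hh) wins in 2 cases
@assert count(==(0), results) == 3   # 3 ties
```

Already at n=3 Alice’s pattern wins more often, even though both patterns have the same expected count.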

Simulating one game

We start by writing a simulator of a single game:

using Random

function play(n::Integer)
    seq = randstring("ht", n)
    return (hh=count("hh", seq, overlap=true),
            ht=count("ht", seq, overlap=true))
end

The function is not optimized for speed (we could even avoid storing the whole sequence),
but I think it nicely shows how powerful Julia's library functions are. The randstring function
allows us to generate random strings, in this case consisting of a random sequence of h and t.
Next, the count function allows us to count the number of occurrences of the desired patterns.
Note that we use the overlap=true keyword argument to count all occurrences of the pattern
(by default only disjoint occurrences are counted).
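The effect of overlap=true is easy to see on a short example:

```julia
# With overlap=true every starting position is counted
@assert count("hh", "hhhh", overlap=true) == 3
# By default only disjoint occurrences are counted
@assert count("hh", "hhhh") == 2
# ht occurrences can never overlap, so the keyword does not matter for Alice
@assert count("ht", "htht", overlap=true) == count("ht", "htht") == 2
```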

Let us check the output of a single run of the game:

julia> play(10)
(hh = 3, ht = 3)

In my case (I did not seed the random number generator) we see that for n=10 we got a sequence
with 3 occurrences of hh and 3 of ht, so it is a tie.

Testing the properties of the game

Here is a simulator that, for a given n, runs the game reps times and aggregates the results:

using DataFrames
using Statistics
using StatsBase

function sim_play(n::Integer, reps::Integer)
    df = DataFrame([play(n) for _ in 1:reps])
    df.winner = cmp.(df.hh, df.ht)
    agg = combine(df,
                  ["hh", "ht"] .=> [mean std skewness],
                  "winner" .=>
                  [x -> mean(==(i), x) for i in -1:1] .=>
                  ["ht_win", "tie", "hh_win"])
    return insertcols!(agg, 1, "n" => n)
end

What we do in the code is as follows. First we run the game reps times and transform the results into a DataFrame.
Next we add a column denoting the winner of the game: in the "winner" column 1 means that hh won, 0 means a tie, and -1 means that ht won.
Finally we compute the following aggregates (using the transformation minilanguage; if you do not have much experience with it you can have a look at this post):

  • mean, standard deviation, and skewness of hh and ht counts;
  • probability that ht wins, that there is a tie and that hh wins.
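The winner encoding relies on Base.cmp, which returns 1, 0, or -1 depending on the ordering of its two arguments:

```julia
# cmp(hh_count, ht_count): 1 = hh wins, 0 = tie, -1 = ht wins
@assert cmp(3, 1) == 1   # hh saw more occurrences
@assert cmp(2, 2) == 0   # tie
@assert cmp(0, 2) == -1  # ht saw more occurrences
```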

Here is the result of running the code for reps=1_000_000 and n varying from 2 to 16:

julia> Random.seed!(1234);

julia> reduce(vcat, [sim_play(n, 1_000_000) for n in 2:16])
15×10 DataFrame
 Row │ n      hh_mean   ht_mean   hh_std    ht_std    hh_skewness  ht_skewness   ht_win    tie       hh_win
     │ Int64  Float64   Float64   Float64   Float64   Float64      Float64       Float64   Float64   Float64
─────┼────────────────────────────────────────────────────────────────────────────────────────────────────────
   1 │     2  0.25068   0.249825  0.433405  0.432912     1.15052    1.15578      0.249825  0.499495  0.25068
   2 │     3  0.499893  0.499595  0.706871  0.5          1.06068    0.00162      0.374385  0.375765  0.24985
   3 │     4  0.751224  0.748855  0.902063  0.559496     1.0232     0.00312512   0.373833  0.37559   0.250577
   4 │     5  1.00168   1.00012   1.06192   0.612535     0.940274  -6.5033e-5    0.406445  0.28037   0.313185
   5 │     6  1.25098   1.2493    1.19926   0.661162     0.869559  -0.0012833    0.437276  0.233841  0.328883
   6 │     7  1.49972   1.50011   1.32213   0.707523     0.812272  -0.00190003   0.437774  0.234531  0.327695
   7 │     8  1.75064   1.74802   1.43616   0.750169     0.76024    0.00319491   0.440714  0.211252  0.348034
   8 │     9  1.99906   2.00108   1.53902   0.789413     0.715722   0.000107041  0.451749  0.189353  0.358898
   9 │    10  2.24857   2.25009   1.63787   0.829086     0.676735  -0.00207707   0.45343   0.184585  0.361985
  10 │    11  2.50092   2.50007   1.73343   0.867326     0.646397   0.000650687  0.454418  0.175059  0.370523
  11 │    12  2.74753   2.75065   1.81994   0.901478     0.621238  -0.00118389   0.458332  0.164575  0.377093
  12 │    13  2.99635   3.00128   1.90199   0.935108     0.597227   0.00212776   0.460248  0.159239  0.380513
  13 │    14  3.2469    3.25101   1.9814    0.96887      0.575535  -0.000255108  0.460817  0.154523  0.38466
  14 │    15  3.50074   3.49934   2.05981   0.998945     0.55527    0.000827465  0.461547  0.147699  0.390754
  15 │    16  3.75258   3.7513    2.13521   1.03027      0.538056   0.000772964  0.463627  0.142931  0.393442

What do we learn from these results?

On average, hh and ht occur the same number of times.
We see this in the "hh_mean" and "ht_mean" columns.
This is expected: in any given pair of consecutive tosses, hh and ht have the same
probability of occurrence (0.25), so the result follows from the linearity of expected value.
We can see that as we increase n the values in these columns increase by roughly 0.25.
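Concretely, a sequence of n tosses contains n-1 adjacent pairs, each matching a fixed two-letter pattern with probability 1/4, so the expected count is (n-1)/4. This matches the means in the table above:

```julia
# Expected number of occurrences of a fixed 2-letter pattern in n tosses:
# (n - 1) adjacent pairs, each a match with probability 1/4
expected_count(n) = (n - 1) / 4

@assert expected_count(2) == 0.25   # table: hh_mean ≈ 0.25 for n = 2
@assert expected_count(10) == 2.25  # table: hh_mean ≈ 2.25 for n = 10
@assert expected_count(16) == 3.75  # table: hh_mean ≈ 3.75 for n = 16
```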

However, the probability of ht winning is higher than the probability of hh winning
(except for n=2, when they are equal). We can see this in the "ht_win" and "hh_win" columns.
This is surprising, as the patterns occur, on average, the same number of times.

To understand the phenomenon we can look at the "hh_std", "ht_std",
"hh_skewness", and "ht_skewness" columns.
We can clearly see that the hh pattern count has a higher standard deviation and for n>2 it is positively skewed
(while ht has zero skewness).
This means that hh counts are more spread out (i.e. they can be high, but also low).
Additionally, relative to ht, for hh we have a few quite high values balanced by more low values (as the means for both patterns are the same). This, in turn, means that when hh wins over ht it wins by a larger margin, but this happens less often than ht winning over hh.

The core reason for this behavior was discussed in my previous post. The hh values can cluster (as e.g. in the hhh pattern), while ht patterns cannot overlap.

Conclusions

I hope you found this puzzle interesting. If you are interested how the properties we described can be proven analytically I recommend you check out this paper.

WAW2024 conference: June 3-6, 2024

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2024/05/31/waw2024.html

Introduction

Next week I am organizing the WAW2024 conference. The event covers various aspects of theoretical and applied modeling of networks.

As an introduction I want to run a simulation of an example problem. Consider a random graph in which an edge between any two nodes exists with probability p. Next, assume that we repeatedly pick an edge uniformly at random from the remaining graph, mark the two nodes forming this edge as matched, and remove them from the graph. The question is: what is the expected fraction of nodes that get matched by this process?

Today, I will investigate this problem using simulation.

The post was written under Julia 1.10.1, Graphs.jl 1.11.0, and DataFrames.jl 1.6.1.

The simulation

Here is a simulator of our greedy matching process. In the simulation we traverse all edges of the graph in a random order.
In the matched vector we keep track of which nodes have already been matched.

using Graphs
using Random
using Statistics

function run_sim(n::Integer, p::Real)
    g = erdos_renyi(n, p)
    matched = fill(false, n)
    for e in shuffle!(collect(edges(g)))
        n1, n2 = e.src, e.dst
        if !(matched[n1] || matched[n2])
            matched[n1] = true
            matched[n2] = true
        end
    end
    return mean(matched)
end
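A quick sanity check of the simulator (assuming run_sim above is defined): the returned value is a fraction in [0, 1], and since nodes are always matched in pairs, the number of matched nodes must be even:

```julia
using Graphs, Random, Statistics

# assumes `run_sim` from above is defined
Random.seed!(42)
n = 1_000
frac = run_sim(n, 0.01)

# the result is a fraction of nodes
@assert 0.0 <= frac <= 1.0
# nodes are matched in pairs, so the matched count is even
@assert iseven(round(Int, frac * n))
```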

The experiment

Let us now test our simulator for a graph on 10000 nodes and p varying from 0.00001 to 0.1 (on logarithmic scale).

julia> using DataFrames

julia> df = DataFrame(p=Float64[], rep=Int[], res=Float64[])
0×3 DataFrame
 Row │ p        rep    res
     │ Float64  Int64  Float64
─────┴─────────────────────────

julia> ps = [10.0^i for i in -5:-1]
5-element Vector{Float64}:
 1.0e-5
 0.0001
 0.001
 0.010000000000000002
 0.1

julia> Random.seed!(1234);

julia> @time for p in ps, rep in 1:16
           push!(df, (p, rep, run_sim(10_000, p)))
       end
 79.190585 seconds (438.02 M allocations: 14.196 GiB, 7.13% gc time, 0.36% compilation time)

julia> df
80×3 DataFrame
 Row │ p        rep    res
     │ Float64  Int64  Float64
─────┼─────────────────────────
   1 │  1.0e-5      1   0.0948
   2 │  1.0e-5      2   0.094
   3 │  1.0e-5      3   0.097
   4 │  1.0e-5      4   0.0892
   5 │  1.0e-5      5   0.0848
   6 │  1.0e-5      6   0.093
  ⋮  │    ⋮       ⋮       ⋮
  76 │  0.1        12   0.9992
  77 │  0.1        13   0.999
  78 │  0.1        14   0.999
  79 │  0.1        15   0.999
  80 │  0.1        16   0.9988
                69 rows omitted

The simulation took a bit over a minute, mainly due to the p=0.1 case, which generates a lot of edges in the graph.
Let us aggregate the obtained data to get the mean, standard error, and range of the results for each value of p:

julia> combine(groupby(df, "p"),
               "p" => (x -> 10_000 * first(x)) => "mean_degree",
               "res" => mean,
               "res" => (x -> std(x) / sqrt(length(x))) => "res_se",
               "res" => extrema)
5×5 DataFrame
 Row │ p        mean_degree  res_mean  res_se       res_extrema
     │ Float64  Float64      Float64   Float64      Tuple…
─────┼───────────────────────────────────────────────────────────────
   1 │  1.0e-5          0.1  0.090975  0.000842986  (0.0848, 0.097)
   2 │  0.0001          1.0  0.499888  0.00190971   (0.4848, 0.5134)
   3 │  0.001          10.0  0.909425  0.000523729  (0.9062, 0.9126)
   4 │  0.01          100.0  0.990162  0.000257694  (0.9888, 0.992)
   5 │  0.1          1000.0  0.999     5.47723e-5   (0.9986, 0.9994)

We can see that the sharp increase in the fraction of matched nodes happens around a mean degree of 1 in the graph.
Additionally, we see that even for high p we do not match every node in the greedy matching process.
Finally, the obtained results are relatively well concentrated around the mean.

Conclusions

If you want to see how this problem can be solved analytically, I recommend reading this paper.
Using the formulas derived there, we can compare our simulation results with the asymptotic theory:

julia> (10_000 .* ps) ./ (10_000 .* ps .+ 1)
5-element Vector{Float64}:
 0.09090909090909091
 0.5
 0.9090909090909091
 0.9900990099009901
 0.999000999000999

Indeed we see that the match is quite good.

If such problems are interesting for you I invite you to join us during WAW2024 conference.