
The return of the graphs (and an interesting puzzle)

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2023/06/23/graphspuzzle.html

Introduction

This week I come back to graphs. The reason is that I have participated
in an inspiring Social Networks & Complex Systems Workshop.

During this workshop Paweł Prałat posed the following puzzle:

You are given eight batteries, out of which four are good and four
are depleted, but visually they do not differ.
You have a flashlight that needs two good batteries to work.
You want your flashlight to work. What is the minimum number of times
you need to put two batteries into your flashlight to be sure that it works,
in the worst case?

In this post I want to discuss how it can be solved.
We will start with an analytical solution and then give a computational
one. You can judge for yourself which is harder.

The post was written under Julia 1.9.1,
Combinatorics v1.0.2, DataFrames v1.5.0,
SimpleGraphs v0.8.4, and SimpleGraphAlgorithms v0.4.21.

Analytical solution

Before we begin we need to make one observation. If I say that I want to use
a given number of trials in the worst case, I can specify all of them upfront (before
making any tests). The reason is that when I put some batteries into the
flashlight it either works (and then we are done) or it does not (and we need to continue).
So the situation is simpler than in many other puzzles, where you might want
to adjust your strategy based on the answers to your previous queries.

What we want to show is that we need 7 trials. We start by showing
that 7 tests are enough. To see this, notice that we have 4 good batteries.
Therefore, if we split our 8 batteries into three groups, there will be one
group that contains two good batteries (by the pigeonhole principle).
What we need to do is to show that using 7
comparisons we can test every pair of batteries within each of these
three groups (so, in particular, some pair of good batteries gets tested).

Here is how you can do it.
Number the batteries from 1 to 8 and split them into three groups:
{1, 2, 3}, {4, 5, 6}, and {7, 8}. Now assume that we make all
possible comparisons within groups. In each of the first two groups
there are three possible comparisons, and in the last group only one:
3 + 3 + 1 = 7 in total. This finishes the proof that 7 comparisons are enough.
We can visualize this solution as follows (a line represents a comparison
we make):

  1      4    7
 / \    / \   |
2---3  5---6  8

We are left with showing that 6 comparisons are not enough. To see
this, note the following. In the picture above we have a graph on 8 nodes
with 7 edges. We could claim that we have a solution because in
any four-element subset of its nodes there exist at least two nodes connected
by an edge (within one group).

So to show that 6 comparisons are not enough we must show that, no matter
how we make them, there will always be 4 nodes that are mutually not
connected. We prove this by contradiction. Assume we have some assignment
of 6 edges under which at most three nodes are mutually not connected. Without loss
of generality assume that these are nodes 1, 2, and 3. But this means that each
of the nodes 4, 5, 6, 7, and 8 must be connected to one of the nodes 1, 2, or 3
by at least one edge (otherwise we could add it and get 4 mutually unconnected nodes).
So we must use up 5 edges to make these connections.
We are left with one edge (recall that we assume we can use 6 edges in total).
If this last edge has an endpoint in {1, 2, 3}, then there is no edge among the nodes 4, 5, 6, 7, and 8,
so we have just found a 5-element set of mutually unconnected nodes.
So assume that this edge connects a pair of nodes from the set {4, 5, 6, 7, 8}.
However, since we only have this one edge, we are still left with 4 mutually unconnected nodes.
E.g. if 4 and 5 are connected, then the nodes {4, 6, 7, 8} are mutually unconnected,
so we have a 4-element set of unconnected nodes. This contradicts the assumption
that at most three nodes were mutually not connected. In conclusion: 6 comparisons
are not enough.

(If you would like to see an alternative proof that uses the probabilistic method
you can reach out to Paweł Prałat who has shown it to me.)

Computational solution

Now let us move to the brute force and computation (and in the process hopefully learn some Julia tricks).

First load the required packages and do some setup:

julia> using Combinatorics

julia> using DataFrames

julia> using SimpleGraphs

julia> using SimpleGraphAlgorithms

julia> use_Cbc()
[ Info: Solver Cbc verbose is set to false

As we saw in the analytical solution, we can represent our queries as graphs on 8 nodes
with some set of edges.

The problem is that there are potentially many such graphs. Therefore we will want to limit
our search to the graphs that are not isomorphic. Two graphs are isomorphic if you can
get one from the other by re-labelling the nodes. Clearly two such graphs are indistinguishable
for our purposes.
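
To make this concrete, here is a minimal sketch (using the packages loaded above) of two
relabelled triangles. This is just an illustration; everything in it follows from the
functions used later in this post:

# Two triangles on different labels are isomorphic graphs,
# so they get the same isomorphism-invariant hash.
ga = IntGraph(8)
add!(ga, 1, 2); add!(ga, 2, 3); add!(ga, 1, 3)

gb = IntGraph(8)
add!(gb, 6, 7); add!(gb, 7, 8); add!(gb, 6, 8)

uhash(ga) == uhash(gb)  # expected to be true
is_iso(ga, gb)          # expected to be true (may print solver status)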

What we want to show is that all graphs on 8 nodes and 6 edges contain at least 4 nodes that
are not connected by an edge. And that there exist graphs on 8 nodes and 7 edges in which
the maximum number of unconnected nodes is 3.

So how do we create a list of all non-isomorphic graphs having 6 and 7 edges respectively?

Let us start with a simpler case of graphs on 8 nodes and 3 edges and list non-isomorphic graphs:

julia> function create_graph(es)
           g = IntGraph(8)
           for e in es
               add!(g, e...)
           end
           return g
       end
create_graph (generic function with 1 method)

julia> g3 = map(create_graph, combinations([(i, j) for i in 1:7 for j in i+1:8], 3));

julia> hg3 = uhash.(g3);

julia> g3df = DataFrame(hg=hg3, g=g3);

julia> g3gdf = groupby(g3df, :hg);

julia> redirect_stdout(devnull) do
           for sdf in g3gdf
               for i in 2:nrow(sdf)
                   @assert is_iso(sdf.g[1], sdf.g[i])
               end
           end
       end

julia> noniso3 = combine(g3gdf, first).g;

julia> elist.(noniso3)
5-element Vector{Vector{Tuple{Int64, Int64}}}:
 [(1, 2), (1, 3), (1, 4)]
 [(1, 2), (1, 3), (2, 3)]
 [(1, 2), (1, 3), (2, 4)]
 [(1, 2), (1, 3), (4, 5)]
 [(1, 2), (3, 4), (5, 6)]

Let us explain step by step what we do. The g3 object contains all
graphs on 8 nodes with three edges. Let us check how many of them we have:

julia> length(g3)
3276
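
This count is exactly what we expect: there are binomial(8, 2) = 28 possible edges
and we choose 3 of them:

julia> binomial(binomial(8, 2), 3)
3276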

That is a lot of graphs, but most of them are isomorphic. How do we prune them?
Using the uhash function we compute the hash of each graph.
uhash guarantees that graphs having different hash values are not isomorphic.
The g3gdf object is a GroupedDataFrame that keeps these graphs grouped by their
hash value. We have 5 such groups, as can be checked with:

julia> length(g3gdf)
5

However, it might be the case that we have non-isomorphic graphs that
have the same hash value (this is unlikely but possible). We check it with
the is_iso function. If some graphs in a group were not isomorphic, @assert would error.
Since it does not, we are good. Note that I use the redirect_stdout(devnull)
trick to avoid printing any output that is_iso produces. The reason
is that it calls the Cbc solver, which prints its status to the screen (and since
we do over 3000 calls the screen would be flooded with not very useful output).

With elist.(noniso3) we can see the edges of the five non-isomorphic
graphs with 3 edges.
(And since we have only 5 graphs you can probably convince yourself using pen
and paper that we have found all possible options.)

How do we repeat this process for a larger number of edges?
The same approach would work, but it would be much more time consuming (there are
over 1 million labelled graphs with 7 edges, as the check below shows). So we use the
following trick: we take the
non-isomorphic graphs with 3 edges and add one edge to them. We then get graphs
with 4 edges. Some of them are isomorphic, but we already know how to prune them
so that we are left only with non-isomorphic ones.
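
To see the scale of the problem, the number of ways to choose 7 out of the 28
possible edges is:

julia> binomial(binomial(8, 2), 7)
1184040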

The procedure that iteratively performs this task up to 7 edges is as follows:

julia> function add_possible_edges(g::T) where T
           res = T[]
           for i in 1:7, j in i+1:8
               if !has(g, i, j)
                   newg = deepcopy(g)
                   add!(newg, i, j)
                   @assert NE(newg) == NE(g) + 1
                   push!(res, newg)
               end
           end
           return res
       end
add_possible_edges (generic function with 1 method)

julia> function grow_graphs(noniso)
           g = reduce(vcat, add_possible_edges.(noniso))
           hg = uhash.(g)
           gdf = DataFrame(; hg, g)
           ggdf = groupby(gdf, :hg)
           redirect_stdout(devnull) do
               for sdf in ggdf
                   for i in 2:nrow(sdf)
                       @assert is_iso(sdf.g[1], sdf.g[i])
                   end
               end
           end
           return combine(ggdf, first).g
       end
grow_graphs (generic function with 1 method)

julia> noniso4 = grow_graphs(noniso3)
11-element Vector{UndirectedGraph{Int64}}:
 UndirectedGraph{Int64} (n=8, m=4)
 ⋮
 UndirectedGraph{Int64} (n=8, m=4)

julia> noniso5 = grow_graphs(noniso4)
24-element Vector{UndirectedGraph{Int64}}:
 UndirectedGraph{Int64} (n=8, m=5)
 ⋮
 UndirectedGraph{Int64} (n=8, m=5)

julia> noniso6 = grow_graphs(noniso5)
56-element Vector{UndirectedGraph{Int64}}:
 UndirectedGraph{Int64} (n=8, m=6)
 ⋮
 UndirectedGraph{Int64} (n=8, m=6)

julia> noniso7 = grow_graphs(noniso6)
115-element Vector{UndirectedGraph{Int64}}:
 UndirectedGraph{Int64} (n=8, m=7)
 ⋮
 UndirectedGraph{Int64} (n=8, m=7)

In the process we learn that there are 11 non-isomorphic graphs with 4 edges,
24 with 5 edges, 56 with 6 edges, and 115 with 7 edges.

Now for each of these graphs let us find the maximum number of nodes that are
not connected (a maximum independent set). This can be done using the max_indep_set function.
Again we use the devnull trick to avoid printing the solver output:

julia> mis6 = redirect_stdout(devnull) do
           return max_indep_set.(noniso6)
       end
56-element Vector{Set{Int64}}:
 Set([5, 4, 6, 7, 2, 8, 3])
 ⋮
 Set([4, 7, 8, 3])

julia> minimum(length.(mis6))
4

So we see that for graphs with 6 edges there are indeed always at least 4 nodes in the
maximum independent set.

Let us now check the 7 edge case:

julia> mis7 = redirect_stdout(devnull) do
           return max_indep_set.(noniso7)
       end
115-element Vector{Set{Int64}}:
 Set([5, 4, 6, 7, 2, 8, 3])
 ⋮
 Set([7, 2, 8, 3])

julia> minimum(length.(mis7))
3

julia> elist.(noniso7[length.(mis7) .== 3])
1-element Vector{Vector{Tuple{Int64, Int64}}}:
 [(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6), (7, 8)]

Here we see that there exists only one graph (up to isomorphism)
with the property that at most three of its nodes are independent.
And looking at its edges, it is the same graph that we drew
in our analytical solution.

Conclusions

So is the analytical or computational solution more interesting?
For me both have their value and were fun.

If you like such puzzles, and do plan ahead, please consider joining
us next year. From June 3 to 7, 2024 we are going to host
the WAW2024: 19th Workshop on Modelling and Mining Networks
at SGH Warsaw School of Economics. We invite all enthusiasts of graphs:
both theoreticians and practitioners.

Homographs in DataFrames.jl

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2023/06/16/homographs.html

Introduction

Last week I posted about graphs, so I thought to post about homographs today.

From your English lessons you probably remember that homographs are words that share
the same written form but have a different meaning.

Starting with the Julia 1.9 release we have a homograph in DataFrames.jl. It is the
stack function and I will cover it today.

The post was written under Julia 1.9.1 and DataFrames.jl 1.5.0.

Homographs and multiple dispatch

Julia supports multiple dispatch. This means that you can define specialized methods
for the same function depending on the types of its arguments.

For example, if you write 1 + 2 and 1.0 + 2.0, internally different methods of the
+ function are invoked, one working with integers and the other working with floats.
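
As a quick illustration (with a hypothetical function name, unrelated to DataFrames.jl),
you can define your own methods like this:

# Two methods of the same function, dispatched on the argument type:
kindof(x::Integer) = "an integer: $x"
kindof(x::AbstractFloat) = "a float: $x"

kindof(1)    # "an integer: 1"
kindof(1.0)  # "a float: 1.0"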

However, there is one important rule that should be followed. If you add methods
to some function they should perform conceptually similar operations. For example,
1 + 2 produces 3 and 1.0 + 2.0 produces 3.0. In both cases an addition was done.

The reason for this rule is that otherwise when you see code like f(x) you would not
know what the function f does until you know the type of x.

Unfortunately, sometimes such situations happen. Let me tell you the story of the stack
function. It has been defined in DataFrames.jl for many years and is used to perform
wide-to-long conversion of data frames. Recently the Julia maintainers decided that
it makes sense to add a stack function to the Base module and make it combine
a collection of arrays into one larger array.

In such a situation, as DataFrames.jl maintainers we had two options:

  1. keep Base.stack and DataFrames.stack functions separate;
  2. make stack from DataFrames.jl add a method to Base.stack.

In general the first option is preferable. Base.stack and DataFrames.stack
do different things so they should be separate functions. However, there is one
problem with this approach. All legacy code that used stack from DataFrames.jl
would stop working and users would need to write DataFrames.stack instead.
This is something we did not want, so we decided to go for option 2, that is,
to add a method to Base.stack that handles data frames. The reason we
decided on this is that there is a very low risk of confusion, since stack from
DataFrames.jl always requires an AbstractDataFrame as its first argument.
You can see it here:

julia> using DataFrames

julia> methods(stack)
# 6 methods for generic function "stack" from Base:
 [1] stack(df::AbstractDataFrame)
     @ DataFrames ~\.julia\packages\DataFrames\LteEl\src\abstractdataframe\reshape.jl:136
 [2] stack(df::AbstractDataFrame, measure_vars)
     @ DataFrames ~\.julia\packages\DataFrames\LteEl\src\abstractdataframe\reshape.jl:136
 [3] stack(df::AbstractDataFrame, measure_vars, id_vars; variable_name, value_name, view, variable_eltype)
     @ DataFrames ~\.julia\packages\DataFrames\LteEl\src\abstractdataframe\reshape.jl:136
 [4] stack(iter; dims)
     @ abstractarray.jl:2743
 [5] stack(f, iter; dims)
     @ abstractarray.jl:2772
 [6] stack(f, xs, yzs...; dims)
     @ abstractarray.jl:2773

At the same time standard Base.stack does not work with data frames at all.

OK, enough theory. Let us have a look at stack from Base and from DataFrames.jl
in action.

Combining collections of arrays

I will concentrate here on the simplest (and most often needed) scenarios.
Assume you have a vector of vectors of equal length:

julia> x = [1:2, 3:4, 5:6]
3-element Vector{UnitRange{Int64}}:
 1:2
 3:4
 5:6

We might want to turn it into a matrix. There are two ways you could want to do this.
The first is to put these vectors as columns of the produced matrix, like this:

julia> stack(x)
2×3 Matrix{Int64}:
 1  3  5
 2  4  6

The second is to put them as rows (this is often needed, and in the past I always
used permutedims to get it, which was a bit cumbersome):

julia> stack(x, dims=1)
3×2 Matrix{Int64}:
 1  2
 3  4
 5  6

The third commonly used pattern is when you apply a function to each element of a
vector and this function returns another vector (or e.g. a tuple). Let us have a look
at an example:

julia> using Statistics

julia> f(x) = [sum(x), mean(x)]
f (generic function with 1 method)

julia> f.(x)
3-element Vector{Vector{Float64}}:
 [3.0, 1.5]
 [7.0, 3.5]
 [11.0, 5.5]

This is the traditional broadcasting way to apply the function to such a vector.
However, often you want the result as a flat matrix. Now you can do:

julia> stack(f.(x))
2×3 Matrix{Float64}:
 3.0  7.0  11.0
 1.5  3.5   5.5

which can be done even more simply by just writing:

julia> stack(f, x)
2×3 Matrix{Float64}:
 3.0  7.0  11.0
 1.5  3.5   5.5
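
And, just like in the plain collection case, you can combine this form with dims
(a small sketch based on the methods listed earlier; here the result should have one row per element of x):

stack(f, x, dims=1)  # should give a 3×2 matrix, one [sum, mean] row per element of x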

In summary, Base.stack is a super nice little utility that comes in handy very often
if you work with arrays.

Transforming data from wide to long format

In DataFrames.jl, stack transforms data from wide to long format.
Assume you have the following input data frame:

julia> df = DataFrame(year=[2020, 2021], Spring=1:2, Summer=3:4, Autumn=5:6, Winter=7:8)
2×5 DataFrame
 Row │ year   Spring  Summer  Autumn  Winter
     │ Int64  Int64   Int64   Int64   Int64
─────┼───────────────────────────────────────
   1 │  2020       1       3       5       7
   2 │  2021       2       4       6       8

It is in wide format: for each year you have four columns, one per season, holding some values.
Assume that instead we want a long data frame with year-season combinations and one column with the values.
With DataFrames.stack it is enough to pass the data frame and specify which columns hold the values:

julia> stack(df, Not(:year))
8×3 DataFrame
 Row │ year   variable  value
     │ Int64  String    Int64
─────┼────────────────────────
   1 │  2020  Spring        1
   2 │  2021  Spring        2
   3 │  2020  Summer        3
   4 │  2021  Summer        4
   5 │  2020  Autumn        5
   6 │  2021  Autumn        6
   7 │  2020  Winter        7
   8 │  2021  Winter        8

Or, if you want to be fancier, you can e.g. change the generated column names:

julia> stack(df, Not(:year), variable_name="season", value_name="number")
8×3 DataFrame
 Row │ year   season  number
     │ Int64  String  Int64
─────┼───────────────────────
   1 │  2020  Spring       1
   2 │  2021  Spring       2
   3 │  2020  Summer       3
   4 │  2021  Summer       4
   5 │  2020  Autumn       5
   6 │  2021  Autumn       6
   7 │  2020  Winter       7
   8 │  2021  Winter       8
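
As a side note, going back from the long format to the wide one can be done with the
unstack function from DataFrames.jl (not covered in detail in this post); a minimal sketch:

long = stack(df, Not(:year), variable_name="season", value_name="number")
unstack(long, :year, :season, :number)  # should reconstruct the wide table (columns may allow missing values)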

Conclusions

In this post I have given basic examples of Base.stack and DataFrames.stack usage.
I recommend having a look at their documentation for more complete information.
The point is that both functions are quite useful in daily data wrangling, so it is
worth knowing them.

Additionally, I wanted to highlight some general considerations of package design in Julia
and the challenges that maintainers face. In the specific case of stack we decided to break
the "all methods of a function should do a similar thing" rule in favor of user convenience
and of making sure that legacy DataFrames.jl code keeps working.

Refreshing Mining Complex Networks book

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2023/06/09/graphs.html

Introduction

Two years ago, together with Paweł Prałat and François Théberge,
we wrote the Mining Complex Networks book.
Since it has received a positive response from the community,
we decided to start planning its second edition, where
we want to add material that covers recent advances in the field.

This decision prompted me to write something about graph analysis
that is at the same time a nice application of the Julia language.

The post was written under Julia 1.9.0 and Graphs.jl 1.8.0.

The problem

Let me start with a business scenario.
Assume you have a set of products in a store and know which of them
are bought together. You represent this data as a graph where
nodes are products and edges represent co-purchases of products.

You want to find products that are not bought together, but that
fall into similar baskets. For example, assume you have two types
of milk. Most likely they are not bought together, but they are both
bought with e.g. bread. Such products could be called substitutes.

In the book we discuss some advanced methods that can be used for this task.
In this post let me concentrate on a simple way to detect such products.

Assume that I have four products i, j, k, and l. If in the
graph we have edges i-j, j-k, k-l, and l-i, but do not have
edges i-k and j-l then we can say that i and k are substitutes
and j and l are substitutes (so these four nodes could be called
double-substitutes).

From a graph theory perspective the nodes {i, j, k, l} form a
cycle of length 4 and do not form any shorter cycle
(so this cycle has a hole inside it). Let us call such cycles minimal.
We can visualize this situation for example as follows:

  i
 / \
l   j
 \ /
  k

Our task for today is the following. Assume we have a random graph
with n nodes. In this graph each edge is included with probability
p, independently from every other edge. We want to check how
many minimal 4-cycles it contains.
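
In the simulation part below we will use Graphs.jl, where such a random graph can be
generated with the erdos_renyi function; a minimal sketch:

using Graphs

g = erdos_renyi(10, 0.2)  # 10 nodes; each possible edge present independently with probability 0.2
(nv(g), ne(g))            # number of nodes and the (random) number of edges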

Analytical solution

For a random graph it is possible to compute the expected number of
minimal 4-cycles relatively easily.

Using the additivity of expectation we can concentrate on four-element
subsets of the set of nodes. For each such subset {i, j, k, l}
we have three possibilities that they can form a minimal 4-cycle:

  i        i        i
 / \      / \      / \
l   j    l   k    k   j
 \ /      \ /      \ /
  k        j        l

Note that the probability of observing any of them is p^4*(1-p)^2
(we see 4 edges and do not see 2 edges).

Therefore we can easily write a function that computes the expected
number of such motifs as:

expected_count_empty_4cycle(n, p) = p^4*(1-p)^2 * 3 * binomial(n, 4)

Let us calculate the expected number of such motifs for n equal to 10
and 100, and p equal to 0.1, 0.2, and 0.3:

julia> expected_count_empty_4cycle.([10, 100], [0.1 0.2 0.3])
2×3 Matrix{Float64}:
   0.05103      0.64512      2.50047
 952.858    12046.0      46690.0
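
Note the broadcasting trick used above: pairing a (column) vector with a 1×3 row matrix
broadcasts over all combinations of their elements, producing a 2×3 matrix. The same
pattern works with any function, e.g.:

[10, 100] .* [0.1 0.2 0.3]  # also broadcasts to a 2×3 matrix, one entry per combination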

The function is fast and simple, but what if we made a mistake when defining it?
With Julia it is easy to run a simulation that checks the result.

Simulation solution, part 1

Let us start with a solution that reproduces the thinking we used to derive
our analytical result:

function naive_count_empty_4cycle(g)
    empty4cycle = 0
    for i in 1:nv(g), j in i+1:nv(g), k in j+1:nv(g), l in k+1:nv(g)
        empty4cycle += has_edge(g, i, j) && has_edge(g, j, k) &&
                       has_edge(g, k, l) && has_edge(g, l, i) &&
                       !has_edge(g, i, k) && !has_edge(g, j, l)
        empty4cycle += has_edge(g, i, k) && has_edge(g, k, j) &&
                       has_edge(g, j, l) && has_edge(g, l, i) &&
                       !has_edge(g, i, j) && !has_edge(g, k, l)
        empty4cycle += has_edge(g, i, j) && has_edge(g, j, l) &&
                       has_edge(g, l, k) && has_edge(g, k, i) &&
                       !has_edge(g, i, l) && !has_edge(g, j, k)
    end
    return empty4cycle
end

The function expects a Graphs.jl graph g and traverses all
four-element subsets of its nodes.

Let us check it:

julia> using Graphs

julia> using Random

julia> using Statistics

julia> Random.seed!(1234);

julia> [mean(naive_count_empty_4cycle(erdos_renyi(10, p)) for i in 1:1000)
        for p in [0.1, 0.2, 0.3]]
3-element Vector{Float64}:
 0.051
 0.598
 2.455

So the results look good for n=10. But what about n=100?

Let us check timing of a single run:

julia> @time naive_count_empty_4cycle(erdos_renyi(100, 0.1));
  0.142410 seconds (274 allocations: 38.594 KiB)

We can see that the function is slow. If we wanted to run it 1000 times it would take us
over 2 minutes. Let us think of a faster approach.

Simulation solution, part 2

The idea we can use is the following. Consider two nodes i and j and assume they
are not connected. Then it is enough to find common neighbors of i and j
and check how many pairs of them are not connected.

Here is an example implementation of this idea:

function find_common_sorted!(common, ni, nj)
    empty!(common)
    iidx = 1
    jidx = 1
    while iidx <= length(ni) && jidx <= length(nj)
        if ni[iidx] < nj[jidx]
            iidx += 1
        elseif ni[iidx] > nj[jidx]
            jidx += 1
        else
            push!(common, ni[iidx])
            iidx += 1
            jidx += 1
        end
    end
    nothing
end

function fast_count_empty_4cycle(g)
    common = Int[]
    empty4cycle = 0

    for i in 1:nv(g)
        ni = neighbors(g, i)
        for j in i+1:nv(g)
            has_edge(g, i, j) && continue
            nj = neighbors(g, j)
            find_common_sorted!(common, ni, nj)
            for a in 1:length(common), b in a+1:length(common)
                empty4cycle += !has_edge(g, common[a], common[b])
            end
        end
    end
    @assert iseven(empty4cycle)
    return empty4cycle ÷ 2
end

Note that in the code we divide the number of empty cycles found by two because,
if nodes {i, j, k, l} form an empty cycle, we count it twice
(once starting with the {i, k} pair and once starting with the {j, l} pair).
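
Note also that find_common_sorted! assumes that both neighbor lists are sorted, which is
the case for the adjacency lists returned by neighbors in Graphs.jl. A quick standalone
sanity check of this merge-style intersection:

common = Int[]
find_common_sorted!(common, [1, 3, 5, 7], [2, 3, 4, 7])
common  # should now hold [3, 7]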

Let us check our function:

julia> @time fast_count_empty_4cycle(erdos_renyi(100, 0.1));
  0.001491 seconds (280 allocations: 40.047 KiB)

Indeed it is significantly faster. Therefore we can use it to also check the n=100 case:

julia> [mean(fast_count_empty_4cycle(erdos_renyi(10, p)) for i in 1:1000)
        for p in [0.1, 0.2, 0.3]]
3-element Vector{Float64}:
 0.051
 0.598
 2.455

julia> [mean(fast_count_empty_4cycle(erdos_renyi(100, p)) for i in 1:1000)
        for p in [0.1, 0.2, 0.3]]
3-element Vector{Float64}:
   954.364
 12017.067
 46574.827

The obtained results confirm our analytical solution, so we have some more confidence
that indeed we have derived it correctly.

Conclusions

I hope you found the problem of finding the number of minimal 4-cycles interesting.
Personally, I really enjoyed coding it in Julia. In particular, note that
with our faster version of the code we do almost no allocations:

julia> gr = erdos_renyi(1000, 0.1); @time fast_count_empty_4cycle(gr)
  2.534875 seconds (4 allocations: 496 bytes)
10169170

julia> expected_count_empty_4cycle.(1000, 0.1)
1.0064361314250002e7

and the code runs quite fast.