Re-posted from: http://www.stochasticlifestyle.com/233-2/
DataStructures.jl claims it's fast. How does it do? I wrote some quick code to check it out. What I wanted to find out is which approach does best for implementing a stack where each element is three integers. I tried filling a pre-allocated array, pushing onto three separate vectors, and several implementations of the stack from the DataStructures.jl package, timed with Benchmark.jl.
```julia
using Benchmark
using DataStructures

# Pre-allocated 2D array, scalar writes, row-major loop order
function baseline()
  stack = Array{Int64,2}(1000000,3)
  for i=1:1000000, j=1:3
    stack[i,j] = i
  end
end

# Same, but column-major loop order (matches Julia's memory layout)
function baseline2()
  stack = Array{Int64,2}(1000000,3)
  for j=1:3, i=1:1000000
    stack[i,j] = i
  end
end

# Pre-allocated array filled with row slice assignments
function f0()
  stack = Array{Int64}(1000000,3)
  for i = 1:1000000
    stack[i,:] = [i,i,i]
  end
end

# Transposed layout, column slice assignments
function f02()
  stack = Array{Int64}(3,1000000)
  for i = 1:1000000
    stack[:,i] = [i;i;i]
  end
end

# Three native vectors, one per component
function f1()
  stack1 = Vector{Int64}(0)  # start empty (length 1 would leave a garbage element)
  stack2 = Vector{Int64}(0)
  stack3 = Vector{Int64}(0)
  for i = 1:1000000
    push!(stack1,i)
    push!(stack2,i)
    push!(stack3,i)
  end
end

# Three DataStructures.jl stacks, one per component
function f2()
  stack1 = Stack(Int)
  stack2 = Stack(Int)
  stack3 = Stack(Int)
  for i = 1:1000000
    push!(stack1,i)
    push!(stack2,i)
    push!(stack3,i)
  end
end

# One DataStructures.jl stack of 3-tuples
function f3()
  stack = Stack(Tuple{Int64,Int64,Int64})
  for i = 1:1000000
    push!(stack,(i,i,i))
  end
end

# One DataStructures.jl stack of length-3 vectors
function f4()
  stack = Stack(Vector{Int64})
  for i = 1:1000000
    push!(stack,[i,i,i])
  end
end

base = benchmark(baseline,"baseline",1000)
println(base)
base2 = benchmark(baseline2,"baseline2",1000)
println(base2)
df0 = benchmark(f0,"array",1000)
println(df0)
df02 = benchmark(f02,"arrayTranspose",1000)
println(df02)
df1 = benchmark(f1,"vectorStack",1000)
println(df1)
df2 = benchmark(f2,"dsStacks",1000)
println(df2)
df3 = benchmark(f3,"dsStackTuple",1000)
println(df3)
df4 = benchmark(f4,"dsStackVector",1000)
println(df4)
```
The results were as follows:
| Benchmark | Iterations | TotalWall (s) | AverageWall (s) | MaxWall (s) | MinWall (s) |
|------------------|------|---------|------------|-----------|------------|
| "baseline" | 1000 | 11.7169 | 0.0117169 | 0.0158455 | 0.00978837 |
| "baseline2" | 1000 | 9.84362 | 0.00984362 | 0.0126953 | 0.00694176 |
| "array" | 1000 | 114.288 | 0.114288 | 0.172499 | 0.0775942 |
| "arrayTranspose" | 1000 | 110.981 | 0.110981 | 0.183495 | 0.0741138 |
| "vectorStack" | 1000 | 34.4623 | 0.0344623 | 0.0455326 | 0.0285367 |
| "dsStacks" | 1000 | 38.0762 | 0.0380762 | 0.0508213 | 0.0303853 |
| "dsStackTuple" | 1000 | 19.3516 | 0.0193516 | 0.0296347 | 0.0140451 |
| "dsStackVector" | 1000 | 184.126 | 0.184126 | 0.227575 | 0.16454 |
| "vectorTuple" | 1000 | 23.65 | 0.02365 | 0.0375346 | 0.0200302 |

(All runs on Julia a2f713dea5ac6320d8dcf2835ac4a37ea751af05, Windows, 8 CPU cores.)
Things to learn from this are:
- Using a tuple is by far the fastest dynamic approach: the tuple-based stacks (dsStackTuple and vectorTuple) beat every other growable structure.
- DataStructures.jl's tuple-based Stack beats everything except the pre-allocated array filled with scalar writes (baseline and baseline2); note that even a pre-allocated array is slow when filled with slice assignments (array and arrayTranspose), and the Stack of Vectors is the slowest option of all.
- The standard vector is pretty close to the DataStructures.jl result: vectorStack vs. dsStacks and vectorTuple vs. dsStackTuple differ by only a few milliseconds per run.
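One note on the results: the table includes a vectorTuple benchmark whose definition is missing from the code listing above. A minimal sketch consistent with the name (the `f5` below is my reconstruction, not the original code): push 3-tuples onto a single native Vector.

```julia
# Hypothetical reconstruction of the missing "vectorTuple" benchmark:
# one native Vector holding immutable 3-tuples.
function f5()
    stack = Tuple{Int64,Int64,Int64}[]  # empty vector of triples
    for i = 1:1000000
        push!(stack, (i, i, i))
    end
    stack
end
```

Benchmarked the same way as the others, e.g. `benchmark(f5,"vectorTuple",1000)`.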
The end result is: use arrays when you can pre-allocate and need mutability, but if you want to throw things into and retrieve them from a dynamic data structure, using tuples is key. DataStructures.jl has some nice features and (obviously) implementations of data structures, but its stacks perform roughly on par with native vectors here (slightly faster with tuples, slightly slower otherwise), so don't expect a massive speedup. Still, it's a well-made package you should try out.
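To make the "tuples are key" point concrete, here is a minimal usage sketch (mine, not from the benchmarks above) of a native vector of tuples as a dynamic stack, including retrieval:

```julia
# A dynamic stack of (x, y, z) triples backed by a native Vector.
# Int64 tuples are immutable bits types, so they are stored inline
# with no per-element heap allocation.
stack = Tuple{Int64,Int64,Int64}[]

for i = 1:3
    push!(stack, (i, i, i))   # push a whole triple in one operation
end

x, y, z = pop!(stack)          # pop and destructure the newest triple
```

By contrast, pushing `[i, i, i]` allocates a fresh heap array for every element, which is consistent with dsStackVector being the slowest entry in the table.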
The post Benchmarks of Multidimensional Stack Implementations in Julia appeared first on Stochastic Lifestyle.