Genie Builder Beta Launch | JuliaHub

By: JuliaHub

Re-posted from: https://info.juliahub.com/blog/newsletter-march-2024-genie-builder-beta-launch-create-web-apps-and-dashboards-easily

Genie Builder Beta Launch: Genie Builder is the new easy way to build and share web apps and dashboards using JuliaHub. Genie Builder features drag-and-drop UI editing, single-click deployment and hosting, and the combined power and ease of use of Julia + JuliaHub. Click here for more information.

JuliaHub at International Battery Seminar and Exhibit (IBSE) in Orlando March 12-15: JuliaHub will showcase JuliaSimBatteries at the International Battery Seminar and Exhibit (IBSE) in Orlando, Florida from March 12-15. JuliaSimBatteries is an advanced tool for simulating lithium-ion batteries, integrating electrical, thermal and degradation physics. JuliaHub will also co-sponsor the Volta Foundation Happy Hour on Tuesday March 12. Click here for more information and to register for IBSE, and click here for more information and to register for the Volta Foundation Happy Hour.

Free Webinars from JuliaHub: JuliaHub provides free one-hour webinars on a variety of topics. Registration is free, but space is limited. Click the links below to register, or watch past webinars here.

All webinars run 1-2 pm Eastern (US):

  • Revolutionizing Battery Quality: Strategies for Battery Defect Mitigation using JuliaSim – Dr. Marc Berliner, JuliaSim Batteries Lead Developer, and Dr. Ranjan Anantharaman, JuliaHub Sales Engineer – Tuesday March 19 – Register: Link
  • Mastering Interactive Dash Apps with Dash.jl: A Flight Traffic Visualization Journey on JuliaHub – Maja Gwóźdź, JuliaHub and ETH Zurich – Thursday March 21 – Register: Link
  • Hands-On Parallelizing Simulations and Parameter Estimation with JuliaSim – Dr. Ranjan Anantharaman, JuliaHub Sales Engineer – Tuesday March 26 – Register: Link
  • Advancements in Acausal Modeling with ModelingToolkit v9 – Dr. Chris Rackauckas, JuliaHub VP Modeling and Simulation – Tuesday April 2 – Register: Link
  • Comparative Analysis of Cell Chemistries with JuliaSim Batteries – Dr. Marc Berliner, JuliaSim Batteries Lead Developer – Wednesday April 17 – Register: Link
  • Time Series Forecasting: Predictive Analytics for Future Insights – Phil Vernes, JuliaHub Sales Engineer – Friday April 26 – Register: Link

JuliaCon 2024: JuliaCon 2024 takes place from July 9-13 in Eindhoven, Netherlands. Workshops will take place Tuesday July 9, presentations will run Wednesday July 10 through Friday July 12, and the hackathon will be Saturday July 13. Click here for more information, to register and to take advantage of early bird pricing.

Barcelona Julia Meetup: Adopting Julia – ASML’s Journey and How to Transition from MATLAB will be held Thursday April 11 in Barcelona. Click here to register. Presentations include:

  • ASML’s Julia Journey – Jorge Vieyra, ASML Senior Development Engineer
  • From MATLAB to Julia: Learnings – Gareth Thomas, VersionBay Co-Founder
  • A Beginner’s Intro to Julia – Pere Giménez, Genie Data Science Advocate

JuliaHub Blog Posts: JuliaHub has published several new blog posts. Click below to read.

Symbolic Numerics: Dr. Chris Rackauckas (JuliaHub VP Modeling & Simulation) has published a new blog post on symbolic numerics. Click here to read Symbolic-Numerics: How Compiler Smarts Can Help Improve the Performance of Numerical Methods (Nonlinear Solvers in Julia).

Semantic Versioning (Semver): Semantic Versioning (Semver) Is Flawed, and Downgrade CI Is Required to Fix It is another new blog post from Dr. Chris Rackauckas (JuliaHub VP Modeling & Simulation). Click here for more.

Physics-Enhanced Deep Surrogates for Partial Differential Equations: MIT and IBM Find Clever AI Ways Around Brute-Force Math is a new article from IEEE Spectrum about Physics-Enhanced Deep Surrogates for Partial Differential Equations, a paper co-authored by Dr. Chris Rackauckas (JuliaHub VP Modeling and Simulation) and published in Nature Machine Intelligence in December.

This Month in Julia World: Stefan Krastanov’s monthly Julia newsletter contains the latest Julia developments and more. Click here to read.

JuliaHub Consulting Services: Would your organization benefit from a 100x increase in simulation speeds? JuliaSim might be the solution you need. Click here for more information about JuliaSim, and to contact us to learn how JuliaSim can help your business succeed.

Free Compute on JuliaHub (20 hours): In addition to the features JuliaHub has always offered for free – Julia ecosystem search, package registration tools, a dedicated package server – the platform now also gives every user 20 hours of free compute. This allows people to seamlessly share Pluto notebooks and IDE projects with others and let them get their feet wet with computing without having to open up their wallets. Click here to get started or check out Deep Datta’s introductory video, “JuliaHub Is a Free Platform to Start Your Technical Computing Journey”, where he explains how and why to start using JuliaHub for cloud computing.

Converting from Proprietary Software to Julia: Are you looking to leverage Julia’s superior speed and ease of use, but limited due to legacy software and code? JuliaHub and our partners can help accelerate replacing your existing proprietary applications, improve performance, reduce development time, augment or replace existing systems and provide an extended trusted team to deliver Julia solutions. Leverage experienced resources from JuliaHub and our partners to get your team up and running quickly. For more information, please contact us.

Careers at JuliaHub: JuliaHub is a fast-growing tech company with fully remote employees in 20 countries on 6 continents. Click here to learn more about exciting careers and internships with JuliaHub.

Julia and JuliaHub in the News

  • IEEE Spectrum: MIT and IBM Find Clever AI Ways Around Brute-Force Math
  • BNN Breaking: Julia Programming Language’s 15-Year Journey: Revolutionizing Technical Computing
  • Built In: 6 Reasons to Learn Julia in 2024
  • Analytics Insight: Why You Should Learn Julia in 2024
  • Analytics Insight: R, Python, and Julia: A Comparative Study of Data Science
  • Analytics Insight: Julia vs Python: Which One is Best for Data Science in 2024?
  • DataScientest: Python vs Julia: Which Is the Best Language for Data Science?
  • Outlook: Classroom To Career: Transitioning Tips For Aspiring Programmers

Contact Us: Please contact us if you want to:

  • Learn more about JuliaHub, JuliaSim, Pumas, PumasQSP or CedarEDA
  • Obtain pricing for Julia consulting projects for your organization
  • Schedule Julia training for your organization
  • Share information about exciting new Julia case studies or use cases
  • Spread the word about an upcoming online or offline event involving Julia
  • Partner with JuliaHub to organize a Julia event online or offline
  • Submit a Julia internship, fellowship or job posting

About JuliaHub and Julia

JuliaHub is a fast and easy-to-use code-to-cloud platform that accelerates the development and deployment of Julia programs. JuliaHub users include some of the most innovative companies in a range of industries including pharmaceuticals, automotive, energy, manufacturing, and semiconductor design and manufacture.

Julia is a high performance open source programming language that powers computationally demanding applications in modeling and simulation, drug development, design of multi-physical systems, electronic design automation, big data analytics, scientific machine learning and artificial intelligence. Julia solves the two language problem by combining the ease of use of Python and R with the speed of C++. Julia provides parallel computing capabilities out of the box and unlimited scalability.

Calibrating an Ornstein–Uhlenbeck Process

By: Dean Markwick's Blog -- Julia

Re-posted from: https://dm13450.github.io/2024/03/09/Calibrating-an-Ornstein-Uhlenbeck-Process.html

Read enough quant finance papers or books and you’ll come across the
Ornstein–Uhlenbeck (OU) process. This post explores the OU process:
the equations, how to simulate such a process, and how to estimate its parameters.


Enjoy these types of posts? Then you should sign up for my newsletter.


I’ve briefly touched on mean reversion and OU processes before in my
Stat Arb – An Easy Walkthrough
blog post where we modelled the spread between an asset and its
respective ETF. The whole concept of ‘mean reversion’ comes up
frequently in finance and at different time scales. The OU process can
be thought of as the first basic extension of Brownian motion: instead
of moving purely at random, the process now has a slight structure and
oscillates around a constant value.

The Hudson Thames group have a similar post on OU processes (Mean-Reverting Spread Modeling: Caveats in Calibrating the OU Process) and
my post should be a nice complement, with code and some extensions.

The Ornstein-Uhlenbeck Equation

As a continuous process, we write the change in \(X_t\) as an increment in time and some noise

\[\mathrm{d}X_t = \theta (\mu - X_t) \mathrm{d}t + \sigma \mathrm{d}W_t\]

The amount it changes in time depends on the previous value \(X_t\) and two free parameters \(\mu\) and \(\theta\):

  • \(\mu\) is the long-term mean of the process (the level it reverts to).
  • \(\theta\) is the mean reversion or momentum parameter, depending on its sign.

If \(\theta\) is 0 we can see the equation collapses down to a simple random walk, \(\mathrm{d}X_t = \sigma \mathrm{d}W_t\).

If we assume \(\mu = 0\), so the long-term average is 0, then a positive value of \(\theta\) means we see mean reversion. Large values of \(X\) mean the next change is likely to have a negative sign, leading to a smaller value in \(X\).

A negative value of \(\theta\) means the opposite: a large value of \(X\) generates a further large positive change, and the process explodes.

If we discretise the process we can simulate some samples with different parameters to illustrate these two modes:

\[X_{t+1} - X_t = \theta (\mu - X_t) \Delta t + \sigma \sqrt{\Delta t} W_t\]

where \(W_t \sim N(0,1)\).

which is easy to write out in Julia. We can save some time by drawing the random values first and then just summing everything together.

using Distributions, Plots

function simulate_os(theta, mu, sigma, dt, maxT, initial)
    p = Array{Float64}(undef, length(0:dt:maxT))
    p[1] = initial
    # Draw all the random increments up front
    w = sigma * rand(Normal(), length(p)) * sqrt(dt)
    # Then iterate the discretised update rule
    for i in 1:(length(p)-1)
        p[i+1] = p[i] + theta*(mu-p[i])*dt + w[i]
    end
    return p
end

We have two classes of OU process we want to simulate: a
mean-reverting version (\(\theta > 0\)) and a momentum version
(\(\theta < 0\)). We also simulate a random walk (\(\theta = 0\)) at
the same time for comparison. We will assume \(\mu = 0\), which keeps the pictures simple.

maxT = 5
dt = 1/(60*60)
vol = 0.005

initial = 0.00*rand(Normal())

p1 = simulate_os(-0.5, 0, vol, dt, maxT, initial)
p2 = simulate_os(0.5, 0, vol, dt, maxT, initial)
p3 = simulate_os(0, 0, vol, dt, maxT, initial)

plot(0:dt:maxT, p1, label = "Momentum")
plot!(0:dt:maxT, p2, label = "Mean Reversion")
plot!(0:dt:maxT, p3, label = "Random Walk")

Figure: simulated OU paths showing momentum, mean reversion, and a random walk.

The mean-reverting path (orange) hasn’t moved far from the long-term average (\(\mu=0\)) and the momentum path has diverged the furthest from the starting point, which lines up with its name. The random walk sits in between the two, as we would expect.

Now we have successfully simulated the process we want to try and
estimate the \(\theta\) parameter from the simulation. We have two
slightly different (but related) methods to achieve this.

OLS Calibration of an OU Process

When we look at the generating equation we can simply rearrange it into a linear equation.

\[\Delta X = \theta \mu \Delta t - \theta \Delta t X_t + \epsilon\]

and the usual OLS equation

\[y = \alpha + \beta X + \epsilon\]

such that

\[\alpha = \theta \mu \Delta t\]

\[\beta = -\theta \Delta t\]

where \(\epsilon\) is the noise. So we just need a DataFrame with the difference between subsequent observations and relate that to the previous observation. Just a diff and a shift.

using DataFrames, DataFramesMeta
momData = DataFrame(y=p1) # the momentum sample
momData = @transform(momData, :diffY = [NaN; diff(:y)], :prevY = [NaN; :y[1:(end-1)]])

Then we fit the usual OLS model with the GLM package.

using GLM

mdl = lm(@formula(diffY ~ prevY), momData[2:end, :])
alpha, beta = coef(mdl)

# Back out the OU parameters from the regression coefficients
theta = -beta / dt
mu = alpha / (theta * dt)

Which gives us \(\mu = 0.0075\) and \(\theta = -0.3989\): close to zero
for the drift, and the reversion parameter has the correct sign.

Doing the same for the mean reversion data, after building revData from p2 in the same way:

revData = DataFrame(y=p2)
revData = @transform(revData, :diffY = [NaN; diff(:y)], :prevY = [NaN; :y[1:(end-1)]])

mdl = lm(@formula(diffY ~ prevY), revData[2:end, :])
alpha, beta = coef(mdl)

theta = -beta / dt
mu = alpha / (theta * dt)

This time \(\mu = 0.001\) and \(\theta = 1.2797\). So a little wrong
compared to the true values, but at least the correct sign.

Does Bootstrapping Help?

It could be that we need more data, so we use the bootstrap to randomly sample from the population to give us pseudo-new draws. We use the DataFrame again and pull random rows with replacement to build out the data set. We do this sampling 1000 times.

res = zeros(1000)
for i in 1:1000
    # Resample rows with replacement and refit the no-intercept OLS
    mdl = lm(@formula(diffY ~ prevY + 0), momData[sample(2:nrow(momData), nrow(momData), replace=true), :])
    res[i] = -first(coef(mdl)/dt)
end

bootMom = histogram(res, label = :none, title = "Momentum", color = "#7570b3")
bootMom = vline!(bootMom, [-0.5], label = "Truth", lw = 2)
bootMom = vline!(bootMom, [0.0], label = :none, color = "black")

We then do the same for the reversion data.

res = zeros(1000)
for i in 1:1000
    mdl = lm(@formula(diffY ~ prevY + 0), revData[sample(2:nrow(revData), nrow(revData), replace=true), :])
    res[i] = first(-coef(mdl)/dt)
end

bootRev = histogram(res, label = :none, title = "Reversion", color = "#1b9e77")
bootRev = vline!(bootRev, [0.5], label = "Truth", lw = 2)
bootRev = vline!(bootRev, [0.0], label = :none, color = "black")

Then we combine both graphs into one plot.

plot(bootMom, bootRev,
     layout=(2,1), dpi=900, size=(800, 300),
     background_color=:transparent, foreground_color=:black,
     link=:all)

Bootstrapping an OU process

The momentum bootstrap has worked and centred around the correct
value, but the same cannot be said for the reversion plot. However, it
has correctly guessed the sign.

AR(1) Calibration of an OU Process

If we continue assuming that \(\mu = 0\) then we can simplify the OLS
to a one-parameter regression – OLS without an intercept. From the
generating process, we can see that this is an AR(1) process – each
observation depends on the previous observation by some amount,
\(X_{t+1} = \phi X_t + \epsilon\), and the exact solution of the OU
process tells us \(\phi = e^{-\theta \Delta t}\). The least squares
estimate of \(\phi\) through the origin is

\[\phi = \frac{\sum _i X_i X_{i-1}}{\sum _i X_{i-1}^2}\]

and the reversion parameter is recovered by inverting \(\phi = e^{-\theta \Delta t}\):

\[\theta = -\frac{\log \phi}{\Delta t}\]

This gives us a simple recipe for calculating \(\theta\).

For the momentum sample:

phi = sum(p1[2:end] .* p1[1:(end-1)]) / sum(p1[1:(end-1)] .^2)
-log(phi)/dt

This gives \(\theta = -0.50184\), so very close to the true value.

For the reversion sample:

phi = sum(p2[2:end] .* p2[1:(end-1)]) / sum(p2[1:(end-1)] .^2)
-log(phi)/dt

Gives \(\theta = 1.26\), so correct sign, but quite a way off.

Finally, for the random walk:

phi = sum(p3[2:end] .* p3[1:(end-1)]) / sum(p3[1:(end-1)] .^2)
-log(phi)/dt

Produces \(\theta = -0.027\), so quite close to zero.

Again, values are similar to what we expect, so our estimation process
appears to be working.

Using Multiple Samples for Calibrating an OU Process

If you aren’t convinced, I don’t blame you. Those point estimates above are nowhere near the actual values that simulated the data, so it’s hard to believe the estimation method is working. Instead, what we need to do is repeat the process: generate many more price paths and estimate the parameters of each one.

To make the code a bit more manageable, I’m going to introduce a
struct that contains the parameters and allows us to simulate and
estimate in a more contained manner.

struct OUProcess
    theta
    mu 
    sigma
    dt
    maxT
    initial
end

We now write functions specialised on this type, which simplifies the
code slightly.

function simulate(ou::OUProcess)
    simulate_os(ou.theta, ou.mu, ou.sigma, ou.dt, ou.maxT, ou.initial)
end

function estimate(ou::OUProcess)
   p = simulate(ou)
   phi =  sum(p[2:end] .* p[1:(end-1)]) / sum(p[1:(end-1)] .^2)
   -log(phi)/ou.dt
end

function estimate(ou::OUProcess, N)
    res = zeros(N)
    for i in 1:N
        res[i] = estimate(ou) # each call simulates a fresh path
    end
    res
end

We use these new functions to draw from the process 1,000 times and
estimate the parameter for each draw, collecting the results as an
array.

ou = OUProcess(0.5, 0.0, vol, dt, maxT, initial)
revPlot = histogram(estimate(ou, 1000), label = :none, title = "Reversion")
vline!(revPlot, [0.5], label = :none);

And the same for the momentum OU process

ou = OUProcess(-0.5, 0.0, vol, dt, maxT, initial)
momPlot = histogram(estimate(ou, 1000), label = :none, title = "Momentum")
vline!(momPlot, [-0.5], label = :none);

Plotting the distribution of the results gives us a decent
understanding of how varied the samples can be.

plot(revPlot, momPlot, layout = (2,1), link=:all)

Multiple sample estimation of an OU process

We can see the heavy-tailed nature of the estimation process, but
thankfully the histograms are centred around the correct number. This
goes to show how difficult it is to estimate the mean reversion
parameter even in this simple setup. So for a real dataset, you need to
work out how to collect more samples or radically adjust how accurate
you think your estimate is.
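To put a rough number on that spread we can summarise the sampled estimates directly. This is a quick sketch assuming the ou and estimate definitions above; the exact numbers will vary from run to run.

using Statistics

est = estimate(ou, 1000)
mean(est), std(est)               # centre and spread of the theta estimates
quantile(est, [0.05, 0.5, 0.95])  # a rough 90% interval around the median

If that interval is wide relative to the effect size you care about, you need more data before trusting the sign of \(\theta\), let alone its magnitude.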

Summary

We have progressed from simulating an Ornstein-Uhlenbeck process to
estimating its parameters using various methods. We attempted to
enhance the accuracy of the estimates through bootstrapping, but we
discovered that the best approach to improve the estimation is to have
multiple samples.

So if you are trying to fit this type of process to some real world
data, be it the spread between two stocks
(Statistical Arbitrage in the U.S. Equities Market),
client flow (Unwinding Stochastic Order Flow: When to Warehouse Trades) or anything
else you believe might be mean reverting, make sure you understand how
much data you might need to model the process accurately.

Working with a grouped data frame, part 2

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2024/03/08/gdf.html

Introduction

This is a follow-up to the post from last week. We will continue
discussing how one can work with GroupedDataFrame objects in DataFrames.jl.
Today we focus on indexing of grouped data frames.

The post was written under Julia 1.10.1 and DataFrames.jl 1.6.1.

Warm-up: getting group indices

First create some grouped data frame:

julia> using DataFrames

julia> df = DataFrame(int=[1, 3, 2, 1, 3, 2],
                      str=["a", "a", "c", "c", "b", "b"])
6×2 DataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     3  a
   3 │     2  c
   4 │     1  c
   5 │     3  b
   6 │     2  b

julia> gdf = groupby(df, :str, sort=true)
GroupedDataFrame with 3 groups based on key: str
First Group (2 rows): str = "a"
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     3  a
⋮
Last Group (2 rows): str = "c"
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     2  c
   2 │     1  c

It is sometimes useful to learn the group number of each row of the source data frame df in a grouped data frame gdf.
You can easily get this information with groupindices:

julia> groupindices(gdf)
6-element Vector{Union{Missing, Int64}}:
 1
 1
 3
 3
 2
 2

Extracting a single group

A basic operation when indexing a GroupedDataFrame is to pick a group by its number. Here is an example:

julia> gdf[1]
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     3  a

julia> gdf[2]
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     3  b
   2 │     2  b

julia> gdf[3]
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     2  c
   2 │     1  c

Note that gdf behaves similarly to a vector. You can even use begin and end in indexing:

julia> gdf[begin]
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     3  a

julia> gdf[end]
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     2  c
   2 │     1  c

Often you might want to extract a group not by its position in gdf, but by the value of the grouping
variable or variables. In this case you can use a GroupKey, a dictionary, a tuple, or a named tuple to achieve this.

Let us check how it works. Start with a dictionary, a tuple, and a named tuple:

julia> gdf[Dict("str" => "b")] # dictionary
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     3  b
   2 │     2  b

julia> gdf[("b",)] # tuple
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     3  b
   2 │     2  b

julia> gdf[(; str="b")] # named tuple
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     3  b
   2 │     2  b

With a GroupKey we first need to get it from keys(gdf), but everything else works the same:

julia> key = keys(gdf)[1]
GroupKey: (str = "a",)

julia> gdf[key]
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     3  a

You might ask why we require passing the grouping value in a container (dictionary, tuple, named tuple, GroupKey)
instead of passing the required value directly when indexing. The reason is that if you grouped your data by an integer column
the result would be ambiguous. Here is an example showing that under the defined rules there is no such ambiguity:

julia> gdf2 = groupby(df, :int, sort=false)
GroupedDataFrame with 3 groups based on key: int
First Group (2 rows): int = 1
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     1  c
⋮
Last Group (2 rows): int = 2
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     2  c
   2 │     2  b

julia> gdf2[3] # third group
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     2  c
   2 │     2  b

julia> gdf2[(3, )] # group with value of the grouping variable equal to 3
2×2 SubDataFrame
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     3  a
   2 │     3  b

Extracting multiple groups

You now know how to pick a single group, so selecting multiple groups is a natural next step.
You can use a collection of any of the selectors we have already discussed. Here are some examples:

julia> gdf[[3, 1]] # selection by group number
GroupedDataFrame with 2 groups based on key: str
First Group (2 rows): str = "c"
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     2  c
   2 │     1  c
⋮
Last Group (2 rows): str = "a"
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     3  a

julia> gdf[[("c",), ("a",)]] # selection by grouping variable value
GroupedDataFrame with 2 groups based on key: str
First Group (2 rows): str = "c"
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     2  c
   2 │     1  c
⋮
Last Group (2 rows): str = "a"
 Row │ int    str
     │ Int64  String
─────┼───────────────
   1 │     1  a
   2 │     3  a

Note that indexing allows both for reordering and for dropping groups, which often comes in handy when analyzing data.
Also note that groupindices is aware of such changes:

julia> groupindices(gdf[[3, 1]])
6-element Vector{Union{Missing, Int64}}:
 2
 2
 1
 1
  missing
  missing

Here the group with "c" is first, the group with "a" is second, and the group with "b" is dropped, so missing is returned for its rows in the produced vector.

It is also worth remembering that subset and filter can be used with GroupedDataFrames, as shown in the sketch below. This topic is discussed in more detail in this post.
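For example, here is a quick sketch assuming the gdf defined above (outputs omitted): filter keeps or drops whole groups, while subset filters rows within each group and can use whole-group statistics in its condition.

julia> filter(sdf -> sum(sdf.int) > 3, gdf) # keep only groups whose :int column sums to more than 3

julia> subset(gdf, :int => x -> x .== maximum(x)) # within each group, keep the rows attaining the group maximum

With this data, filter returns a GroupedDataFrame containing the "a" and "b" groups, and subset returns a DataFrame with one row per group (the row holding the largest int value).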

Key lookup

Sometimes we do not want to index into a grouped data frame, but just check if it contains some key. This is easily achievable with the haskey function:

julia> haskey(gdf, ("a",))
true

julia> haskey(gdf, ("z",))
false

Conclusions

In this post we discussed indexing of GroupedDataFrames. This concludes the basic tutorial of working with these data structures.
I hope you will find the functionalities I have covered useful in your work.