
Fitting Price Impact Models

By: Dean Markwick's Blog -- Julia

Re-posted from: https://dm13450.github.io/2025/03/14/Fitting-Price-Impact-Models.html

A big part of market microstructure is price impact and understanding how you move the market every time you trade. In the simplest sense, every trade upends the supply and demand of an asset even for a tiny amount of time. The market responds to this change, then responds to the response, then responds to that response, etc. You get the idea. It’s a cascading effect of interactions between all the people in the market.




Price impact happens at both the micro and the macro level. At the micro level, each trade moves the market a little bit based on the instantaneous market conditions, commonly called ‘liquidity’. At the macro level, continued trading in one direction has a compounding and overlapping effect. In reality, you can’t separate the two effects, so market impact models need to work at both small and large scales.

This post is inspired by two sources:

  1. Handbook of Price Impact Modelling – Chapter 7
  2. Stochastic Liquidity as a Proxy for Nonlinear Price Impact

Both cover very similar models but one is a fairly expensive
book and the other is on SSRN for free. The same author is involved in
both of them too.

In terms of data, there are two routes you can go down.

  1. You have your own, private, execution data and can build out a data set for the models.
  2. You use publicly available trades and adjust the models to account for the anonymous data.

In the first case, you will know when an execution started and stopped, so you can record how the price changed. In the second case, the data will be made up of lots of trades and it is less obvious when a parent execution started and stopped.

We will take the second route and use Bitcoin data to look at different price impact models.

As ever I will be using Julia with some of the standard packages.

using LibPQ
using DataFrames, DataFramesMeta
using Dates
using Plots
using GLM, Statistics, Optim
using LaTeXStrings # provides the L"..." string macro used in the plot labels below

Bitcoin Price Impact Data

We will use my old trusty Bitcoin data set that I collected
in 2021. It’s just over a day’s worth of Bitcoin trades and L1 prices
that I piped into QuestDB. Full detail in Using QuestDB to Build a Crypto Trade Database in Julia.

First, we connect to the database.

conn = LibPQ.Connection("""
             dbname=qdb
             host=127.0.0.1
             password=quest
             port=8812
             user=admin""");

For each trade recorded in the database, we also want to join the best bid and offer immediately before it. This is where an ASOF join is useful. It joins two tables on their timestamps, matching each row of the first table with the latest row of the second table whose timestamp is at or before it. It sounds more complicated than it really is: in short, it takes the trade table and adds in the prices using the quote just before each trade.
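For example, a trade stamped at 09:00:10.250 would be joined with the bid/ask quoted at 09:00:10.100 rather than a later update at 09:00:10.400 – always the most recent price known at the time of the trade.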

trades = execute(conn, 
    "WITH
trades AS ( 
   SELECT * FROM coinbase_trades
   ),
prices as (
  select * from coinbase_bbo
)
select * from trades ASOF JOIN prices") |> DataFrame
dropmissing!(trades);
trades = @transform(trades, :mid = 0.5*(:ask .+ :bid))

For these small tables the query runs pretty much instantly and returns a Julia data frame. We also calculate the mid-price for each row.

In all the price impact models we are aggregating this data:

  1. Group the data by some time bucket (seconds or minutes etc.)
  2. Calculate the net amount, total absolute amount and open and close prices of the bucket.
  3. Calculate the price return using the close-to-close prices.

function aggregate_data(trades, smp)
    tradesAgg = @combine(groupby(@transform(trades, :ts = floor.(:timestamp, smp)), :ts), 
             :q = sum(:size .* :side), 
             :absq = sum(:size), 
             :o = first(:mid), 
             :c = last(:mid));
    tradesAgg[!, "price_return"] .= [NaN; (tradesAgg.c[2:end]./ tradesAgg.c[1:(end-1)]) .- 1]
    tradesAgg[!, "ofi"] .= tradesAgg.q ./ tradesAgg.absq

    tradesAgg
end

We are going to bucket the data by 10 seconds.

aggData  = aggregate_data(trades, Dates.Second(10))

As ever, let’s split this data into a training and test set.

aggDataTrain = aggData[1:7500, :]
aggDataTest = aggData[7501:end, :];

It’s just a simple split on time.

plot(aggDataTrain.ts, aggDataTrain.c, label = "Train")
plot!(aggDataTest.ts, aggDataTest.c, label = "Test")

Calculating the Volatility and ADV

All the models require a volatility and an average daily volume (ADV) calculation. My data runs for just over a day, so we need to adjust for that.

For the ADV we take the sum of the total volume traded and divide by the length of time converted to days.

deltaT = maximum(trades.timestamp) - minimum(trades.timestamp)
deltaTDays = (deltaT.value * 1e-3)/(24*60*60)
adv = sum(trades.size)/deltaTDays
aggDataTrain[!, "ADV"] .= adv
aggDataTest[!, "ADV"] .= adv;

For the volatility, we take the square root of the sum of the squared 5-minute returns. This should probably be annualised if we were comparing the parameters across different assets.

min5Agg = aggregate_data(trades, Dates.Minute(5))
volatility = sqrt(sum(min5Agg.price_return[2:end] .* min5Agg.price_return[2:end]))
aggDataTrain[!, "Vol"] .= volatility;
aggDataTest[!, "Vol"] .= volatility;

The ADV and volatility have a normalising effect across assets. So if we had multiple coins, we could use the same model even if one was a highly traded coin like BTC or ETH vs a lower volume coin (the rest of them?!). This would give us comparable model parameters to judge the impact effect.

As our data sample is so small we only calculate one volatility and one ADV value. In reality, you would calculate the volatility/ADV on a rolling basis and then do the train/test split.
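As a sketch of what that rolling version might look like (a hypothetical helper, not from the original post), assuming 10-second buckets and a trailing window measured in buckets:

function rolling_stats(aggData, window; bucket_seconds = 10)
    n = nrow(aggData)
    adv = fill(NaN, n)
    vol = fill(NaN, n)
    for i in (window + 1):n
        w = (i - window):(i - 1) # only use buckets strictly before bucket i
        adv[i] = sum(aggData.absq[w]) / (window * bucket_seconds / (24 * 60 * 60)) # scale the window volume to a per-day figure
        rets = filter(!isnan, aggData.price_return[w])
        vol[i] = sqrt(sum(rets .^ 2))
    end
    return adv, vol
end

Each bucket then only uses information from the past, which removes the mild look-ahead of using full-sample values.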

Models of Market Impact

The paper and book describe different market impact models that all follow a similar functional form. I’ve chosen four of them to illustrate the model fitting process.

  • The Order Flow Imbalance model (OFI)
  • The Obizhaeva-Wang (OW) model
  • The Concave Propagator model
  • The Reduced Form model

For all the models we will state the form of the market impact
ΔI and use the price returns over the same period to find
the best parameters of the model.

The overarching idea is that the return in each bucket is proportional to the amount of volume traded in that bucket plus some suitably decayed contribution from the volumes in earlier buckets.

Order Flow Imbalance

This is the simplest model as it uses only the imbalance over the bucket to predict the return. For the OFI we use the trade imbalance: the net volume divided by the total volume in the bucket.

$$\Delta I_t = \lambda \sigma \frac{q_t}{|q_t| \, \text{ADV}}$$

As there is no dependence on the previous returns, we can use simple linear regression to estimate $\lambda$.

aggDataTrain[!, "x_ofi"] = aggDataTrain.Vol .* (aggDataTrain.ofi ./ aggDataTrain.ADV)
aggDataTest[!, "x_ofi"] = aggDataTest.Vol .* (aggDataTest.ofi ./ aggDataTest.ADV)

ofiModel = lm(@formula(price_return ~ x_ofi + 0), aggDataTrain[2:end, :])

The model has returned a significant value of λ = 59, an in-sample R² of 11% and an out-of-sample RMSE of 0.0003. Encouraging and off to a good start!
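One way to compute that out-of-sample number is to apply the fitted coefficient to the test buckets (a minimal sketch):

predTestOFI = coef(ofiModel)[1] .* aggDataTest.x_ofi # predicted bucket returns on the test set
sqrt(mean((aggDataTest.price_return .- predTestOFI) .^ 2)) # out-of-sample RMSE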

Side note, I’ve written about Order Flow Imbalance before in Order Flow Imbalance – A High Frequency Trading Signal.

The Obizhaeva-Wang (OW) Model

The OW model is a foundational model of market impact and you will see this model frequently across different microstructure papers. It suggests a linear dependence between the signed order flow and price impact but again normalising against the ADV and volatility.

$$\Delta I_t = -\beta I_t + \lambda \sigma \frac{q_t}{\text{ADV}}$$

Again, we create the x variable in the data frame specific to this model, but this one needs special attention to fit.

aggDataTrain[!, "x_ow"] = aggDataTrain.Vol .* (aggDataTrain.q ./ aggDataTrain.ADV);
aggDataTest[!, "x_ow"] = aggDataTest.Vol .* (aggDataTest.q ./ aggDataTest.ADV);

From the market impact formula, we can see that the relationship is recursive: the impact at time t depends on the impact at time t−1. How much of the previous impact is carried over is controlled by β, and in the paper they fix it so that the impact half-life log(2)/β is 60 minutes. This means we have to fit the model as:

  1. Calculate the I given an estimate of λ
  2. Adjust the price returns by this impact
  3. Regress the adjusted price returns against the x variable.
  4. Repeat with the new estimate of λ until converged.

This is a simple 1 parameter optimisation where we minimise the RMSE.

function calcImpact(x, beta, lambda)
    impact = zeros(length(x))
    impact[1] = x[1]
    for i in 2:length(impact)
        impact[i] = (1-beta)*impact[i-1] + lambda*x[i]
    end
    impact
end
	
function fitLambda(x, y, beta, lambda)
    I = calcImpact(x, beta, lambda)
    y2 = y .+ (beta .* I)
    model = lm(reshape(x, (length(x), 1))[2:end, :], y2[2:end])
    model
end

rmse(x) = sqrt(mean(residuals(x) .^2))

We start with λ=1 and let the optimiser do the work.

res = optimize(x -> rmse(fitLambda(aggDataTrain[!, "x_ow"], aggDataTrain[!, "price_return"], 0.01, x[1])), [1.0])

It’s converged! We plot the different values of the objective function and show that this process can find the minimum.

lambdaRes = rmse.(fitLambda.([aggDataTrain[!, "x_ow"]], [aggDataTrain[!, "price_return"]], 0.01, 0:1:20))
plot(0:1:20, lambdaRes, label = :none, xlabel = L"\lambda", ylabel = "RMSE", title = "OW Model")
vline!(Optim.minimizer(res), label = "Optimised Value")

We have a nice convex relationship, which is always a good sign. We then pull out the best-fitting model and estimate the R².

owModel = fitLambda(aggDataTrain[!, "x_ow"], aggDataTrain[!, "price_return"], 0.01, first(Optim.minimizer(res)))

This gives R² = 11%, so roughly the same as the OFI model. For the out-of-sample RMSE we get 0.0006.
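For the recursive models the out-of-sample check also needs the impact state. One way to compute it (a sketch, assuming the impact path is rebuilt from zero over the test set with the fitted λ and the same β = 0.01):

lambdaOW = first(Optim.minimizer(res)) # fitted lambda from the optimiser above
impactTest = calcImpact(aggDataTest.x_ow, 0.01, lambdaOW) # impact state over the test buckets
predTestOW = diff([0.0; impactTest]) # per-bucket change in impact, i.e. the predicted return
sqrt(mean((aggDataTest.price_return .- predTestOW) .^ 2)) # out-of-sample RMSE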

Concave Propagator Model

This model follows the belief that market impact is a power law and
that power is close to 0.5. Using the square root of the total amount
traded and the net direction gives us the x variable.

$$\Delta I_t = -\beta I_t + \lambda \sigma \, \text{sign}(q_t) \sqrt{\frac{|q_t|}{\text{ADV}}}$$

aggDataTrain[!, "x_cp"] = aggDataTrain.Vol .* sign.(aggDataTrain.q) .* sqrt.((aggDataTrain.absq ./ aggDataTrain.ADV));
aggDataTest[!, "x_cp"] = aggDataTest.Vol .* sign.(aggDataTest.q) .* sqrt.((aggDataTest.absq ./ aggDataTest.ADV));

Again, we optimise using the same methodology as above.

res = optimize(x -> rmse(fitLambda(aggDataTrain[!, "x_cp"], aggDataTrain[!, "price_return"], 0.01, x[1])), [1.0])
lambdaRes = rmse.(fitLambda.([aggDataTrain[!, "x_cp"]], [aggDataTrain[!, "price_return"]], 0.01, 0:0.1:1))
plot(0:0.1:1, lambdaRes, label = :none, xlabel = L"\lambda", ylabel = "RMSE", title = "Concave Propagator Model")
vline!(Optim.minimizer(res), label = "Optimised Value")

Another success! This time the R² is 17%, an improvement on the other two models. Its out-of-sample RMSE is 0.0008.

Reduced Form Model

The paper suggests that as the number of trades and time increment
increases the market impact function converges to a linear form with a
dependence on the stochastic volatility of the order flow.

$$\Delta I_t = -\beta I_t + \lambda \sigma \frac{q_t}{\sqrt{v_t \, \text{ADV}}}$$

For this, we need to calculate the stochastic liquidity parameter, $v_t$, which is an exponentially weighted running total of the absolute market volumes.

function calcLiquidity(absq, beta)
    v = zeros(length(absq))
    v[1] = absq[1]
    for i in 2:length(v)
        v[i] = (1-beta)*v[i-1] + absq[i]
    end
    return v
end

v = calcLiquidity(aggDataTrain[!, "absq"], 0.01)
vTest = calcLiquidity(aggDataTest[!, "absq"], 0.01)

plot(aggDataTrain.ts, v, label = "Stochastic Liquidity")
plot!(aggDataTest.ts, vTest, label = "Test Set")

Adding this into our data frame and calculating the x variable is simple.

aggDataTrain[!, "v"] = v
aggDataTest[!, "v"] = vTest

aggDataTrain[!, "x_rf"] = aggDataTrain.Vol .* aggDataTrain.q ./ sqrt.((aggDataTrain.ADV .* aggDataTrain[!, "v"]));
aggDataTest[!, "x_rf"] = aggDataTest.Vol .* aggDataTest.q ./
sqrt.((aggDataTest.ADV .* aggDataTest[!, "v"]));

And again, we repeat the fitting process.

lambdaVals = 0:0.1:5
res = optimize(x -> rmse(fitLambda(aggDataTrain[!, "x_rf"], aggDataTrain[!, "price_return"], 0.01, x[1])), [1.0])
lambdaRes = rmse.(fitLambda.([aggDataTrain[!, "x_rf"]], [aggDataTrain[!, "price_return"]], 0.01, lambdaVals))
plot(lambdaVals, lambdaRes, label = :none, xlabel = L"\lambda", ylabel = "RMSE", title = "Reduced Form Model")
vline!(Optim.minimizer(res), label = "Optimised Value")

This model gives an R² of 10% and an out-of-sample RMSE of 0.0009.

With all four models fitted, we can now look at the differences statistically and how the impact state evolves over the course of the day.

Model                λ      R²     OOS RMSE
OFI                  43     0.11   0.0003
OW                   14     0.11   0.0006
Concave Propagator   0.34   0.17   0.0008
Reduced Form         1.7    0.10   0.0009

So, the concave propagator model has the highest R², with the OFI, OW and reduced form models all sitting around 10-11%. But looking at the RMSE values from the out-of-sample performance, it’s clear that the OFI model seems to be the best.

When we plot the resulting impacts from the 4 models we generally see that they agree, with the OFI model being the most different. This difference comes from its lack of time decay of the previous volumes.
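One way to reconstruct such a comparison is to run each fitted model over the training buckets (a sketch, using the λ values from the table above and β = 0.01 for the three recursive models):

fitted = [("OFI", "x_ofi", 43.0), ("OW", "x_ow", 14.0), ("Concave Propagator", "x_cp", 0.34), ("Reduced Form", "x_rf", 1.7)]
impactPlot = plot(xlabel = "Time", ylabel = "Impact")
for (name, col, lambda) in fitted
    # OFI has no persistent state, so its impact is just the per-bucket term
    impact = name == "OFI" ? lambda .* aggDataTrain[!, col] : calcImpact(aggDataTrain[!, col], 0.01, lambda)
    plot!(impactPlot, aggDataTrain.ts, impact, label = name)
end
impactPlot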

Conclusion

Overall, I don’t think these results are that informative; my data set is tiny compared to the paper’s (1 day vs months). Instead, use this as more of an instructional guide on how to fit these models. We didn’t even explore optimising the time decay (the β values) for Bitcoin, which could be substantially different from the paper’s equity dataset. So there is plenty more to do!

Importance Sampling, Reinforcement Learning and Getting More From The Data You Have

By: Dean Markwick's Blog -- Julia

Re-posted from: https://dm13450.github.io/2024/12/17/Importance-Sampling-Reinforcement-Learning-and-Getting-More-From-The-Data-You-Have.html

A new paper hit my feed: Choosing trading strategies in electronic execution using importance sampling. I had only encountered importance sampling in a statistical computing course during my PhD, and I had never strayed away from Monte Carlo sampling, but this practical example provided an intuitive understanding of its importance and utility.




The key tenet of the paper is to use the data you have to evaluate a strategy you are considering without actually running the new strategy in production. In real life, changing something like these strategies can take a long time, with limited upside but unlimited downside if it all goes wrong.

This blog post will run through the paper and replicate the main themes in Julia. I believe the author is a Julia user too; I remember enjoying their JuliaCon talk about high-frequency covariance matrices – HighFrequencyCovariance: Estimating Covariance Matrices in Julia – and the associated Julia package HighFrequencyCovariance.jl.

The Execution Trader’s Problem

You are an execution trader with access to 4 different broker algorithms (algos) to execute your trades. With each trade you need to choose an algo and measure the trade’s overall slippage – the price you paid vs the price at the start of the order. You want to choose the best algo to ensure each of your trades gets the best price.

How do you choose which one to use? Do you have enough data to decide which one is the best? Is any one algo better than the others? These are all difficult questions to answer, but with some data on how the algos perform you should be able to use the data to help inform your decision.

We are trying to maximise the performance of each trade by choosing the correct algo. Our trade is described by a variable x and each algo performs differently depending on x. The paper calls the performance ‘slippage’ but then tries to maximise the slippage which sounds weird to me – I always talk about minimising slippage! But that’s splitting hairs.

The performance of algo $i$ is described by an analytical function with parameters $\alpha_i, \beta_i$ plus some noise that depends on the duration of the trade $d$ and the volatility $\sigma$.
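Concretely, matching the code below, the expected slippage is a downward-facing parabola in x that peaks at $\beta_i$, with Gaussian noise whose standard deviation scales with the trade duration and the volatility:

$$\text{Slippage}_i(x) = -\alpha_i (x - \beta_i)^2 + \varepsilon, \qquad \varepsilon \sim \mathcal{N}\left(0, \left(\tfrac{d\sigma}{2}\right)^2\right)$$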

using DataFrames, DataFramesMeta, Distributions, Plots, StatsBase # packages used throughout this post

function expSlippage(x, alpha, beta)
   @. -alpha*(x - beta)^2
end

function slippage(x, alpha, beta, d, sigma)
    expSlippage(x, alpha, beta) + rand(Normal(0, d*sigma/2))
end

The α’s and β’s are simple constants set in the paper.

alphas = [5,10,15,20]
betas = [0.2, 0.4, 0.6, 0.8]

x = collect(0:0.01:1)
p = plot(xlabel = "x", ylabel = "Expected Slippage")
for i in eachindex(alphas)
   plot!(p, x, expSlippage(x, alphas[i], betas[i]), label = "Algo " * string(i), lw = 2) 
end
p

Slippage functions

Here we can see which algo is better for each x. In reality, this is impossible to know, or it might not even exist.

We are going to devise a rule of when we will select each trading algo:

  • If x<0.5 then we will randomly select Strategy 1 62.5% of the time and the others 12.5% of the time.

  • If x>0.5 then Strategy 3 62.5% and the others 12.5%.

function tradingRule(x)
    if x < 0.5
        return [0.625, 0.125, 0.125, 0.125]
    else 
        return [0.125, 0.125, 0.625, 0.125]
    end
end

Julia’s vectorisation makes it easy to simulate going through multiple trades.

x = rand(Uniform(), 100)
d = rand(Uniform(), 100)
stratProbs = tradingRule.(x)
strat = rand.(Categorical.(stratProbs))
stratProb = getindex.(stratProbs, strat)
slippageVal = slippage.(x, alphas[strat], betas[strat], d, 5)

res = DataFrame(x=x, d=d, strat=strat, stratProb=stratProb, prob=stratProb, slippage=slippageVal)
first(res, 3)
x           d          strat   stratProb   prob    slippage
0.0192748   0.95432    1       0.625       0.625    1.29969
0.0700494   0.930581   1       0.625       0.625    0.855019
0.925858    0.90087    3       0.625       0.625   -2.62943

This is our ‘production data’ for 100 random trades. The aim of the game is to understand how good our trading rules are rather than trying to estimate how good the individual algos are.

Does our rule above do better than just randomly choosing an algo? This is where we can use importance sampling to take the 100 trades and specially weight them to assess a new trading rule.

Importance Sampling

Importance sampling is about using observations generated under one set of probabilities, q, to estimate quantities under a different set of probabilities, p. In our case, we want to calculate the expected slippage of a new trading strategy given the observations we have of the current strategy.

$$\mathbb{E}[\text{Slippage}] = \frac{1}{N} \sum_i \text{Slippage}_i \, \frac{p_i(\text{New Strategy})}{q_i(\text{Current Strategy})}$$

$q_i(\text{Current Strategy})$ is equal to the stratProb column in the dataframe and $p_i(\text{New Strategy})$ is the probability we would have chosen the given algo under the new strategy.

For the importance sampling, we calculate the likelihood ratio using equal probabilities and then take the weighted average of the slippages (the self-normalised version of the estimator above, dividing by the sum of the ratios rather than N).

res = @transform(res, :EqProb = 0.25)
res = @transform(res, :ratio = :EqProb ./ :stratProb)
@combine(res, :StratSlippage = mean(:slippage), :EqStratSlippage = mean(:slippage, Weights(:ratio)))
StratSlippage EqStratSlippage
-1.02243 -1.8774

The average slippage for the 100 trades is worse (more negative) under equal weighting than under the current strategy. This suggests that randomly choosing an algo would perform worse.

We then plot the rolling average slippage across the orders.

res = @transform(res, :StratSlipapgeRolling = cumsum(:slippage) ./collect(1:length(:slippage)))
res = @transform(res, :EqSlipapgeRolling = cumsum(:slippage .* :ratio) ./cumsum(:ratio))

plot(res.StratSlipapgeRolling, label = "Production", lw =2)
plot!(res.EqSlipapgeRolling, label = "Equal Weighted", lw =2)

Simple strategy slippage

The time series of the slippage shows that the equally weighted strategy is worse, which gives us confidence in the current strategy. When we observe a bad outcome, the likelihood ratio weights that outcome based on how different the probability is from the production strategy.
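For example, a trade where the production rule picked algo 1 with probability 0.625 gets a weight of 0.25 / 0.625 = 0.4 under the equal-weighting rule, while a trade where production picked an algo with probability 0.125 gets a weight of 0.25 / 0.125 = 2, so the rarely chosen algos count for more in the reweighted average.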

How can we use importance sampling to build better strategies?

Easy Reinforcement Learning and Expected Slippage

Each trade is described by x. In this toy model that is just a number but in real life this could correspond to the size of the order, the asset, the time of day and any combination of variables. In the original paper they use the spread, volatility, order size relative to the ADV and duration as descriptive variables of a random dataset. I’m going to keep it simple and stick to x being just a single number.

We want to understand if a particular x means we should use algo i. For this, we need to build an ‘expected slippage’ model where we use the historical x values and outcomes of using algo i.

For the modelling part, we will use xgboost through MLJ.jl.

using MLJ
xgboostModel = @load XGBoostRegressor pkg=XGBoost verbosity = 0
xgboostmodel = xgboostModel(eval_metric=["rmse"]);

The inputs are x and an indicator of the chosen algo.

res2 = coerce(res[:,[:x, :strat, :slippage]], :strat=>Multiclass);

y, X = unpack(res2, ==(:slippage); rng=123);

encoder = ContinuousEncoder()
encMach = machine(encoder, X) |> fit!
X_encoded = MLJ.transform(encMach, X);

xgbMachine = machine(xgboostmodel, X_encoded, y)

evaluate!(xgbMachine,
          resampling=CV(nfolds = 6, shuffle=true),
          measures=[rmse, rsq],
          verbosity=0)

The overall regression gets an R² of 0.5 on our 100-trade dataset – a decent model.

In this new simulation, we will fit the xgboost model on the trades to build up an expected slippage model with all the data we have so far. prepareData and fitSlippage transform the data and fit the model.

We will then use this model to predict the expected slippage (predictSlippage) for each algo and use that to select which algo to use for a given trade.

function prepareData(x, strat, slippage)
    res = coerce(DataFrame(x=x, strat=strat, slippage=slippage), :strat=>Multiclass);
    y, X = unpack(res, ==(:slippage); rng=123);
    encoder = ContinuousEncoder()
    encMach = machine(encoder, X) |> fit!
    X_encoded = MLJ.transform(encMach, X);
    return X_encoded, y, encMach # also return the fitted encoder so the functions below don't rely on the global encMach
end

function fitSlippage(x, strat, slippage, xgboostmodel)
    X_encoded, y, encMach = prepareData(x, strat, slippage)
    xgbMachine = machine(xgboostmodel, X_encoded, y)

    evaluate!(xgbMachine,
          resampling=CV(nfolds = 6, shuffle=true),
          measures=[rmse, rsq],
          verbosity=0)
    return (xgbMachine, encMach)
end

function predictSlippage(x, xgbMachine, encMachine)
    # score every algo for this trade; the strat column is Float64 to match the simulated data below
    X_pred = DataFrame(x=x, strat = [1.0, 2.0, 3.0, 4.0], slippage = NaN)
    X_pred = coerce(X_pred[:,[:x, :strat, :slippage]], :strat=>Multiclass)
    X_pred = MLJ.transform(encMachine, X_pred) # use the encoder that was passed in, not the global one
    preds = MLJ.predict(xgbMachine, X_pred)
    return preds
end

function slippageToProb(preds)
    scores = exp.(preds) ./ sum(exp.(preds)) # softmax over the predicted slippages
    p = ((0.9 .* scores) .+ 0.025) ./ sum((0.9 .* scores) .+ 0.025) # keep at least 2.5% probability on every algo
    return p
end

The predicted slippage is then transformed into a probability using the softmax function (slippageToProb) which gives us a mapping of the real-valued estimated slippage onto a probability. We then sample which strategy to use from this probability. By adding an element of randomness into the algo selection we are making sure we can use the importance sampling framework to either change the model (xgboost to something else) or change how we build the probabilities (softmax to something else).
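As a quick sanity check of that mapping, here is a hypothetical set of predicted slippages pushed through slippageToProb: the best prediction gets the highest probability, but the 2.5% floor means no algo ever drops to zero, which keeps the importance sampling ratios finite.

slippageToProb([-1.0, -2.0, -0.5, -3.0])
# ≈ [0.31, 0.13, 0.50, 0.06]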

To simulate the problem we will start by randomly choosing a strategy for the first 200 runs. After this we will start using the xgboost regression model to predict the expected slippage of each strategy and use this to decide what strategy to use.

epsilon = 0.05
volatility = 5
N = 1000

x = zeros(N)
strat = zeros(N)
slippages = zeros(N)
d = zeros(N)
stratProb = zeros(N)

for i in 1:N
    xVal = rand(Uniform())
    dVal = rand(Uniform())

    if i > 200
        # only use the trades completed so far; row i has not been filled in yet
        xgbMachine, encMachine = fitSlippage(x[1:(i-1)], strat[1:(i-1)], slippages[1:(i-1)], xgboostmodel)
        predCost = predictSlippage(xVal, xgbMachine, encMachine)
        stratProbs = slippageToProb(predCost)
    else
        stratProbs = [0.25, 0.25, 0.25, 0.25]
    end

    stratVal = rand(Categorical(stratProbs))
    slippageVal = slippage(xVal, alphas[stratVal], betas[stratVal], dVal, volatility)
    
    x[i] = xVal
    strat[i] = stratVal
    stratProb[i] = stratProbs[stratVal]
    slippages[i] = slippageVal
    d[i] = dVal
end

res = DataFrame(x=x, d=d, strat=strat, stratProb=stratProb, slippage=slippages)

Again, we output each strategy and the probability that the strategy was used. We use the importance sampling approach to estimate the slippage from choosing an algo randomly, to give us a comparison to the xgboost method.

res = @transform(res, :EqProb = 0.25)
res = @transform(res, :EqRatio = :EqProb ./ :stratProb)
res = @transform(res, :StratSlipapgeRolling = cumsum(:slippage) ./collect(1:length(:slippage)))
res = @transform(res, :EqSlipapgeRolling = cumsum(:slippage .* :EqRatio) ./cumsum(:EqRatio));

plot(res.StratSlipapgeRolling[50:end], label = "Production")
plot!(res.EqSlipapgeRolling[50:end], label = "Equal Weighting")

model slippage

For the first 200 trades we are just selecting randomly, so there is no difference in performance. Afterwards, we can see the XGBoost model starts to outperform as it learns which algo is better for each x.
So whilst we have only run the XGBoost model in production, the importance sampling method shows it is doing better than random.

Testing a New Model Without Running it in Production

The XGBoost model is doing well and outperforming an equally weighted model, but what if you wanted to change from XGBoost to something else? How can you build the case that this is something worth doing?

By constructing the probabilities that the new model would have selected each strategy (the new $p_i$’s) and combining them with the current strategy probabilities (the $q_i$’s), we can estimate the slippage of the new model without having to run any more trades.

With MLJ.jl we can create a new model and pass it into the functions to replicate running the strategy in production. This time we use a simple linear regression model with the same features. We run through the trades in the same order so there is no information leakage.

@load LinearRegressor pkg=MLJLinearModels

linreg = MLJLinearModels.LinearRegressor()

newProb = ones(N) * 0.25

for i in 1:(N-1)

    if i > 200
        linMachine, enchMachine = fitSlippage(res.x[1:i], res.strat[1:i], res.slippage[1:i], linreg)
        predSlippage = predictSlippage(res.x[i+1], linMachine, enchMachine)
        stratProbs = slippageToProb(predSlippage)
        newProbVal = stratProbs[Int(res.strat[i+1])]
        newProb[i+1] = newProbVal # this probability belongs to trade i+1, so store it against that trade
    end
    
end

res[:, :LinearProb] = newProb

res = @transform(res, :LinearRatio = :LinearProb ./ :stratProb)
res = @transform(res, :LinearSlipapgeRolling = cumsum(:slippage .* :LinearRatio) ./cumsum(:LinearRatio))
plot(res.StratSlipapgeRolling[50:end], label = "Production")
plot!(res.EqSlipapgeRolling[50:end], label = "Equal Weighting")
plot!(res.LinearSlipapgeRolling[50:end], label = "Linear Model")

Linear regression strategy

Adding the linear regression decision rule to the data gives us a way of assessing this new model without having to run it directly in production. We can see that the linear model is better than XGBoost and also better than the equal weighting.

A simple bootstrap, where we resample the trades and take the average slippage under each strategy, provides the simplest performance measure.

bs = mapreduce(x-> @combine(res[sample(201:nrow(res), nrow(res)-200), :], 
              :StratSlippage = mean(:slippage), 
              :EqStratSlippage = mean(:slippage, Weights(:EqRatio)),
              :LinearStratSlippage = mean(:slippage, Weights(:LinearRatio))),
			  vcat, 1:1000);

@combine(groupby(stack(bs), :variable), :avg = mean(:value), :sd = std(:value))
variable avg sd
StratSlippage -1.55385 0.0967389
EqStratSlippage -1.59169 0.119028
LinearStratSlippage -1.52706 0.133231

As it’s a toy problem, there is nothing of significance between the models – but both model-based strategies do better than the random allocation.

Conclusion

Importance sampling gives you a way of getting more out of the current data and strategy you are using. By weighting the observations in a new way you can get an idea whether a new strategy is worth it or not.
By rethinking you current setup you can easily add a bit of randomness into decisions and use the importance sampling framework going forward.

Alpha Capture and Acquired

By: Dean Markwick's Blog -- Julia

Re-posted from: https://dm13450.github.io/2024/09/19/Alpha-Capture-and-Acquired.html

People are never short of a trade idea. There is a whole industry of researchers, salespeople and amateurs coming up with trading ideas and making big calls on what stock will go up, which country will cut interest rates and what the price of gold will do next. Alpha capture is about systematically assessing these ideas and working out who has alpha and generates profitable ideas, and who is just making it up as they go along.




Alpha capture started as a way of profiling a broker’s stock recommendations. If you have 50 people recommending you 50 different ideas, how do you know who is good? You’ll quickly run out of money if you blindly follow all the recommendations that hit your inbox. Instead, you need to profile each person’s ideas and see who, on average, makes good recommendations. Whoever is good at picking stocks probably deserves more of your business.

It has since expanded: some hedge funds have internal desks doing a similar analysis on their portfolio managers (PMs) to double down on profitable bets and mitigate the risk of all the PMs picking the same stock. Picking stocks and managing a portfolio across many PMs are two different skills and different departments at your modern hedge fund.

A simple way to measure the alpha of a PM or broker recommendation is to see if the price of a stock they buy (or recommend) goes up after the day they suggest it. Those with alpha would see their picks move higher on a large enough sample, while those without alpha would average out to zero: some ideas go higher, some lower, with the net result being zero alpha. If a PM has the opposite effect, where every stock they buy goes down, they are a contrarian indicator, so take their ideas and do the opposite!

Alpha capture markout graph

Alpha Capture Systems: Past, Present, and Future Directions goes through the history of alpha capture and is a good short read that inspired this blog post.

Basic Alpha Capture

What if we wanted to try our own Alpha Capture? We need some stock recommendations and a way of calculating what happens to the price after the recommendation. This is where the Acquired podcast comes in.

Acquired logo

Acquired tells the stories and strategies of great companies (taken from their website). It’s a pretty popular podcast and each episode gets close to a million listeners. So this makes it an ideal alpha capture study – when they release an episode about a company, does the stock price of that company go higher or lower on average?
If it were to go higher, then each time an episode is released, call your broker and go long the stock!

They aren’t explicitly recommending a stock by talking about
it, as they say in their intro. So it’s just a toy exercise to see if
there is any correlation between the stock price and the release date
of an episode.

To systematically test this we need to get a list of the episodes and calculate a ‘markout’ from each episode.

Collecting Podcast Data

The internet is a wonderful thing and each episode of Acquired is available as an XML feed from transistor.fm. So with some fun XML parsing I can get the full history of the podcast with each date and title.

# Assumed imports: HTTP for the download, CSV/DataFrames for the output, TimeZones for
# the zoned dates, and (my assumption) XML.jl for Node, tag, children and simplevalue.
using CSV, DataFrames, Dates, HTTP, TimeZones, XML

function parseEpisode(x)
  rawDate = first(simplevalue.(x[tag.(x) .== "pubDate"]))
  date = ZonedDateTime(rawDate, dateformat"eee, dd uuu yyyy HH:MM:ss z")

  Dict("title" => first(simplevalue.(x[tag.(x) .== "title"])),
       "date" =>date)
end

function parse_date(t)
   Date(string(split(t, "T")[1]))
end

url = "https://feeds.transistor.fm/acquired"

data = parse(Node, String(HTTP.get(url).body))

episodes = children(data[3][1])
filter!(x -> tag(x) == "item", episodes)
episodes = children.(episodes)

episodeData = parseEpisode.(episodes)

episodeFrame = vcat(DataFrame.(episodeData)...)
CSV.write("episodeRaw.csv", episodeFrame)

After writing the data to a CSV I need to somehow parse the episode title into a stock ticker. This is a tricky task as the episode names are human-friendly, not computer-friendly. So time for our LLM overlords to lend a hand and do the heavy lifting. I drop the CSV into Perplexity and prompt it to add the relevant stock ticker to the file. I then reread the CSV into my notebook.

episodeFrame = CSV.read("episodeTicker.csv", DataFrame)
episodeFrame.date = ZonedDateTime.(String.(episodeFrame.date), dateformat"yyyy-mm-ddTHH:MM:SS.sss-z")

vcat(first(@subset(episodeFrame, :stock_ticker .!= "-"), 4),
        last(@subset(episodeFrame, :stock_ticker .!= "-"), 4))
date                            title                                 stock_ticker   sector_etf
2024-03-17T17:54:00.400+07:00   Renaissance Technologies              RNR            PSI
2024-02-19T17:56:00.410+08:00   Hermès                                RMS.PA         GXLU
2024-01-21T17:59:00.450+08:00   Novo Nordisk (Ozempic)                NOVO-B.CO      IHE
2023-11-26T16:24:00.250+08:00   Visa                                  V              IPAY
2018-09-23T18:28:00.550+07:00   Season 3, Episode 5: Alibaba          BABA           KWEB
2018-08-20T09:20:00.370+07:00   Season 3, Episode 3: The Sonos IPO    SONO           GAMR
2018-08-05T18:15:00.030+07:00   Season 3, Episode 2: The Xiaomi IPO   XIACF          KWEB
2018-07-16T21:40:00.560+07:00   Season 3, Episode 1: Tesla            TSLA           TSLA

It’s done an OK job. Most of the episodes correspond to the right ticker, but we can see it has hallucinated the RenTech stock ticker as RNR. RenTech is a private company with no stock ticker; instead, Perplexity has decided that RNR (a reinsurance company) is the correct ticker. So not 100% accurate. Still, it has saved me a good chunk of time and we can move on to getting the stock price data.

We want to measure the average price move of a stock after an episode is released. If Acquired had stock-picking skill, you would expect the price to increase after the release of an episode, as they generally speak positively about the various companies.

So using AlpacaMarkets.jl we get the stock price for the days before and after each episode. As AlpacaMarkets only has US stock data, only some of the episodes end up with a full dataset.

What is a Markout?

We calculate the change relative to the price at the episode release date (expressed in basis points in the code below) and then aggregate all the stock tickers together.

$$\text{Markout} = \frac{p - p_{\text{episode released}}}{p_{\text{episode released}}}$$

Acquired is about great companies, so they choose to speak favourably about a company. Therefore I think it’s a reasonable assumption that the stock price should increase after everyone gets round to listening to an episode. Once we aggregate all the episodes we should hopefully have enough data to decide if this is true.

using AlpacaMarkets # the Julia client for the Alpaca market data API

function getStockData(stock, startDate)
  prices = AlpacaMarkets.stock_bars(stock, "1Day", startTime=startDate - Month(1), limit=10000)[1]
  prices.date .= startDate
  prices.t = parse_date.(prices.t)
  prices[:, [:t, :symbol, :vw, :date]]
end

function calcMarkout(data)
   arrivalInd = findlast(data.t .<= data.date)
   arrivalPrice = data[arrivalInd, :vw]
   data.arrivalPrice .= arrivalPrice
   data.ts = [x.value for x in (data.t .- data.date)] # days relative to the episode release
   data.markout = 1e4*(data.vw .- data.arrivalPrice) ./ data.arrivalPrice # markout in basis points
   data
end

res = []

for row in eachrow(episodeFrame)
    
    try 
        stockData = getStockData(row.stock_ticker, Date(row.date))
        stockData = calcMarkout(stockData)
        append!(res, [stockData])
    catch e
        println(row.stock_ticker)
    end
end

res = vcat(res...)

With the data pulled we now aggregate by each day before and after the episode.

markoutRes = @combine(groupby(res, :ts), :n = length(:markout), 
                                         :avgMarkout = mean(:markout),
                                         :devMarkout = std(:markout))
markoutRes = @transform(markoutRes, :errMarkout = :devMarkout ./sqrt.(:n))

Always need error bars as this data gets noisy.


markoutResSub = @subset(markoutRes, :ts .<= 60, :n .>= 10)
plot(markoutResSub.ts, markoutResSub.avgMarkout, yerr=markoutResSub.errMarkout, 
     xlabel = "Days", ylabel = "Markout", title = "Acquired Alpha Capture", label = :none)
hline!([0], ls = :dash, color = "grey", label = :none)
vline!([0], ls = :dash, color = "grey", label = :none)

Average markout

There isn’t really a pattern. The majority of the error bars cross zero after the podcast is released.
If you squint a little bit, there seems to be a slight downward trend post-episode, which would suggest they talk about a company at the peak of its stock price.

Beforehand there is a bit of positive momentum, again suggesting that
they release the podcast at the peak of the stock price. Now this is
even more of a stretch given there is only 1 podcast a month and it
takes more than 20 days to prepare an episode (I imagine!), so
more noise than signal.

markoutIndRes = @combine(groupby(res, [:symbol, :ts]), :n = length(:markout), 
                                         :avgMarkout = mean(:markout),
                                         :devMarkout = std(:markout))
markoutIndRes = @transform(markoutIndRes, :errMarkout = :devMarkout ./sqrt.(:n))

p = plot()
hline!(p, [0], ls = :dash, color = "grey", label = :none)
vline!(p, [0], ls = :dash, color = "grey", label = :none)
for sym in ["TSLA", "V", "META"]
   markoutResSub = sort(@subset(markoutIndRes, :symbol .== sym, :ts .<= 60, :n .>= 1), :ts)
    plot!(p, markoutResSub.ts, markoutResSub.avgMarkout, yerr=markoutResSub.errMarkout, 
     xlabel = "Days", ylabel = "Markout", title = "Acquired Alpha Capture", label = sym, lw =2) 
end
p

Individual markouts

When we pull out 3 examples of episodes we can see the randomness and specifically the volatility of TSLA here.

Conclusion

From this, we would not put any specific weight on the stock
performance after an episode is released. There doesn’t appear to be
any statistical pattern to exploit. No alpha means no alpha
capture. It is a nice exercise though and has hopefully explained the
concept of a markout.