Documentation serves multiple purposes and is useful for various audiences: your future self, collaborators, users, and contributors, should you aim to package some of your code into a general-purpose library.
Do not hesitate to refactor your code regularly and remove dead code to prevent confusion for yourself and others.
Comments
In-line comments should be used sparingly. Aim to write self-explanatory code instead. Use comments to provide context not apparent from the code itself, such as references to papers, Stack Overflow topics, or TODOs.
Use single-line comments for brief explanations and multi-line comments for more detailed information.
```julia
#=
This is a multi-line
comment
=#
```
```python
"""
This is a multi-line
comment
"""
```
Tip: use the VS Code Rewrap extension ("Rewrap Comment / Text" command) to nicely format multi-line comments.
On top of nicely formatting your code and adding comments where necessary, literal documentation greatly facilitates the maintenance, understandability, and reproducibility of your code.
Literal documentation
Literal documentation helps users understand your tool and get started with it.
README
A README file is essential for any research repository. It is displayed below the file listing when accessing a GitHub repo.
It should contain, at a minimum, a short description of the project, installation instructions, and a minimal usage example.
API documentation
API documentation describes the usage of functions, classes (types), and modules (packages). Parsers usually support Markdown, which also enhances raw readability for humans. In short, Markdown style consists in using lightweight plain-text conventions (headings, lists, backticks for code) directly in your docstrings.
In Python, the docstring is placed just below the function or method signature:
```python
def best_function_ever(a_param, another_parameter):
    """
    this is the docstring
    """
    # do some stuff
```
In Julia, the docstring is placed just above the function or type definition:
```julia
"""
this is the docstring
"""
function best_function_ever(a_param, another_parameter)
    # do some stuff
end

"Tell whether there are too foo items in the array."
foo(xs::Array) = ...
```
Best practices for docstrings include:
- (in Julia) the signature of your function on the first line,
- a short description,
- the arguments (Args, Input, …),
- the return value(s),
- examples.
Several flavours may be used, even for a single language.
```python
def add(a, b):
    """
    Add two integers.

    This function takes two integer arguments and returns their sum.

    # Parameters:
    a: The first integer to be added.
    b: The second integer to be added.

    # Return:
    int: The sum of the two integers.

    # Raise:
    TypeError: If either of the arguments is not an integer.

    Examples:
    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    >>> add('a', 1)
    Traceback (most recent call last):
    ...
    TypeError: Both arguments must be integers.
    """
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("Both arguments must be integers")
    return a + b
```
````julia
"""
    add(a, b)

Adds two integers.

This function takes two integer arguments and returns their sum.

# Arguments
- `a`: The first integer to be added.
- `b`: The second integer to be added.

# Returns
- The sum of the two integers.

# Examples
```julia-repl
julia> add(2, 3)
5

julia> add(-1, 1)
0
```
"""
function add(a, b)
    return a + b
end
````
You may use tools like Documenter.jl or Sphinx to automatically render your API documentation on a website. GitHub Actions can automate the process of building the documentation for you, similarly to how it can automate testing.
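For instance, a minimal docs/make.jl for Documenter.jl could look like the sketch below, where MyPackage and the repository URL are placeholders for your own package:

```julia
# docs/make.jl
using Documenter, MyPackage

makedocs(sitename = "MyPackage.jl",
         modules = [MyPackage],
         pages = ["Home" => "index.md"])

# Pushes the rendered pages to the `gh-pages` branch when run from CI (e.g. GitHub Actions)
deploydocs(repo = "github.com/<username>/MyPackage.jl.git")
```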
Docstrings may be accompanied by typing.
Type annotations
Typing refers to the specification of variable types and function return types within a programming language. It helps define what kind of data a function or variable can handle, ensuring type safety and reducing runtime errors. It
- clearly indicates the expected input and output types, making the code easier to understand,
- helps catch type-related errors early in the development process,
- encourages consistent usage of types throughout the codebase.
```python
def add(a: int, b: int) -> int:
    return a + b
```
In Python, using typing does not enforce type checking at runtime! You may use decorators to enforce it.
```julia
function add(a::Int, b::Int)
    return a + b
end
```
In Julia, types are enforced at runtime! Type annotations help the Julia compiler optimize performance by making type inferences easier.
Consider raising errors
We do not like reading manuals. But we are forced to read error messages. Use assertions and error messages to handle unexpected inputs and guide users.
In Python:
- assert: when an assert doesn’t pass, it raises an AssertionError. You can optionally add an error message at the end.
- NotImplementedError, ValueError, NameError: commonly used, generic errors you can raise. I probably overuse NotImplementedError compared to other types.
```python
def convolve_vectors(vec1, vec2):
    if not isinstance(vec1, list) or not isinstance(vec2, list):
        raise ValueError("Both inputs must be lists.")
    # convolve the vectors
```
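In Julia, the same idea can be expressed with @assert, error, or throw; here is a minimal sketch mirroring the Python example above:

```julia
function convolve_vectors(vec1, vec2)
    # fail early with an informative message if the inputs are not vectors
    if !(vec1 isa AbstractVector) || !(vec2 isa AbstractVector)
        throw(ArgumentError("Both inputs must be vectors."))
    end
    @assert !isempty(vec1) && !isempty(vec2) "Inputs must be non-empty."
    # convolve the vectors
end
```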
Tutorials
Create tutorials to demonstrate the usage of your code. They can be placed in a folder examples or tutorials. Format them as e.g.
- vignettes in R,
- Jupyter notebooks, which are a perfect format for tutorials.
Accessing documentation
```julia
?cos
?@time
?r""
```
```python
help(myfun)
```
But e.g. VS Code can also be quite helpful (hover over a symbol to see its docstring), and this also works with your own code!
Doc testing
Doc testing, or doctest, allows you to test your code by running examples embedded in the documentation (docstrings). It compares the output of the examples with the expected results given in the docstrings, ensuring the code works as documented.
Why doc testing?
Ensures that the code examples in your documentation are accurate and up-to-date.
Simple to write and understand, making it accessible for both writing and reading tests.
Promotes writing comprehensive docstrings which enhance code readability and maintainability.
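For instance, with Documenter.jl, examples placed in jldoctest blocks inside docstrings are executed when the documentation is built (or when calling Documenter.doctest) and compared against the expected output; Python provides the same functionality with its built-in doctest module. A minimal sketch:

````julia
"""
    add(a, b)

Add two integers.

# Examples
```jldoctest
julia> add(2, 3)
5
```
"""
add(a, b) = a + b
````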
In this post, I explore the benefits and drawbacks of using empirical (ML)-based models versus mechanistic models for predicting ecosystem responses to perturbations. To evaluate these different modelling approaches, I use a mechanistic ecosystem model to generate a synthetic time series dataset.
By applying both modelling approaches to this dataset, I can evaluate their performance. While the ML-based approach yields accurate forecasts under unperturbed dynamics, it inevitably fails when it comes to predicting the ecosystem response to perturbations. On the other hand, the mechanistic model, which is a simplified version of the ground-truth model to reflect a realistic scenario, is inaccurate and cannot forecast, but provides a more adequate approach to predict ecosystem responses to unobserved scenarios.
To improve the accuracy of mechanistic models, I introduce inverse modelling, and in particular an approach that I have developed called piecewise inference. This approach allows one to accurately calibrate complex mechanistic models, and in doing so opens the door to improving our understanding of ecosystems by performing model selection.
Finally, I discuss how hybrid models, which incorporate both ML-based and mechanistic components, offer the potential to benefit from the strengths of both modelling approaches. By examining the strengths and limitations of these different modelling approaches, I hope to provide insights into how best to use them to advance our knowledge of ecological and evolutionary dynamics.
Notes
- This post is under construction and contains typos! If you find some, please contact me so that I can correct them.
- For the sake of clarity, some pieces of code have voluntarily been hidden in external Julia files, which are loaded throughout the post. If you want to inspect them, check out those files in the corresponding GitHub repository.
Generating a synthetic dataset
To generate the synthetic dataset, I consider a 3-species ecosystem model, composed of a resource, a consumer, and a predator species. The resource growth rate depends on water availability. Here is a simplified version of the dynamics:
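One plausible form of these dynamics, consistent with the type II functional responses and the parameters introduced below (the exact formulation used by SimpleEcosystemModel may differ), is

$$
\begin{aligned}
\frac{dR}{dt} &= r_R\big(w(t)\big)\, R \left(1 - \frac{A_{11} R}{K_{11}}\right) - \frac{q_{21} R}{1 + q_{21} H_{21} R}\, C,\\
\frac{dC}{dt} &= r_C\, C + \epsilon\, \frac{q_{21} R}{1 + q_{21} H_{21} R}\, C - \frac{q_{32} C}{1 + q_{32} H_{32} C}\, P,\\
\frac{dP}{dt} &= r_P\, P + \epsilon\, \frac{q_{32} C}{1 + q_{32} H_{32} C}\, P,
\end{aligned}
$$

where $R$, $C$, $P$ are the resource, consumer, and predator abundances, $w(t)$ is the water availability, $r_R(w)$ is the water-dependent resource growth rate, $r_C, r_P < 0$ are the consumer and predator intrinsic growth rates, $\epsilon$ is a conversion efficiency, $K_{11}$ and $A_{11}$ are the resource carrying capacity and intraspecific competition, and $q$, $H$ are the attack rates and handling times of the type II functional responses.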
To implement the model, I use the library EcoEvoModelZoo, which provides ready-to-use mechanistic eco-evolutionary models. I use the model type SimpleEcosystemModel (cf. the documentation of EcoEvoModelZoo). Let’s first construct the trophic network and plot it.
```julia
using EcoEvoModelZoo
using Graphs # for DiGraph and add_edge!

N = 3 # number of species
pos = Dict(1 => [0, 0], 2 => [0.2, 1], 3 => [0, 2])
labs = Dict(1 => "Resource", 2 => "Consumer", 3 => "Prey")

foodweb = DiGraph(N)
add_edge!(foodweb, 2 => 1) # Consumer (node 2) feeds on Resource (node 1)
add_edge!(foodweb, 3 => 2) # Predator (node 3) feeds on Consumer (node 2)

println(foodweb)
```
[Figure: the food web]
Then, I implement the processes that drive the dynamics of the ecosystem. Those include resource limitation for the resource species (e.g. limitation in nutrients), intraspecific competition for the resource species, reproduction, and feeding interactions (grazing and predation). To better understand this piece of code, you may want to refer to one of my previous blog posts, “Inverse ecosystem modelling made easy with PiecewiseInference.jl”.
resource_conversion_efficiency (generic function with 1 method)
Functional responses
The feeding processes implemented are based on a functional response of type II. The attack rates q define the slope of the functional response, while the handling times H define the saturation of this response.
Now we define the numerical values for the parameters of the ecosystem model.
```julia
p_true = ComponentArray(H₂₁ = Float32[1.24], # handling times
                        H₃₂ = Float32[2.5],
                        q₂₁ = Float32[4.98], # attack rates
                        q₃₂ = Float32[0.8],
                        r = Float32[1.0, -0.4, -0.08], # growth rates
                        K₁₁ = Float32[1.0], # carrying capacity for the resource
                        A₁₁ = Float32[1.0]) # competition for the resource
```
And with that, we can plot the dependence between the resource growth rate and water availability as we implemented it. It is a Gaussian dependence: the resource growth rate is maximal at a certain value of water availability. The intuition is that too much or too little water is detrimental to the resource.
And let’s generate a dataset by contaminating the model output with lognormally distributed noise.
```julia
data = simulate(model) |> Array

# contaminating raw data with noise
data = data .* exp.(1f-1 * randn(Float32, size(data)))

# plotting
fig, ax = subplots(figsize=(7, 4))
for i in 1:size(data, 1)
    ax.scatter(tsteps, data[i, :], label=labels_sp[i], color=species_colors[i], s=10.)
end
# ax.set_yscale("log")
ax.set_ylabel("Species abundance")
ax.set_xlabel("Time (days)")
fig.set_facecolor("None")
ax.set_facecolor("None")
fig.legend()
display(fig)
```
Empirical modelling
Let’s build a ML-model, and train the model on the time series.
We’ll use a recurrent neural network model
More specifically, we use a Long Short-Term Memory (LSTM) cell, connected to two dense layers with relu and radial basis (rbf) activation functions, respectively.
```julia
using Flux # Julia deep learning library
include("rnn.jl") # Load some utility functions

args = ArgsEco() # Set up hyperparameters

rbf(x) = exp.(-(x.^2)) # custom activation function
hls = 64 # hidden layer size

# Definition of the RNN
# our model takes in the ecosystem state variables, together with the current water availability
rnn_model = Flux.Chain(LSTM(N + 1, hls),
                       Flux.Dense(hls, hls, relu),
                       Flux.Dense(hls, N, rbf))

@time train_model!(rnn_model, data, args) # Train and output model
```
75.555727 seconds (137.67 M allocations: 50.030 GiB, 6.62% gc time, 37.42%
compilation time)
We can now simulate our trained model in an autoregressive manner, which allows us to forecast over time steps beyond the training dataset.
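The forecasting code lives in `rnn.jl`; a minimal sketch of the idea, where `forecast_rnn` is a hypothetical helper (not part of the Flux API), could look like this: at each step the last prediction is fed back as input, together with the current water availability.

```julia
function forecast_rnn(rnn_model, u0, tsteps_forecast)
    Flux.reset!(rnn_model)                  # clear the LSTM hidden state
    u = Float32.(u0)
    preds = [u]
    for t in tsteps_forecast
        # feed back the last prediction, together with the current water availability
        u = rnn_model(vcat(u, Float32(water_availability(t))))
        push!(preds, u)
    end
    return hcat(preds...)
end
```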
Now let’s see what happens if the resource species goes extinct. Because this species is at the bottom of the trophic chain, we expect a collapse of the ecosystem.
There is clearly a problem! The RNN model outputs almost unchanged dynamics, predicting that, magically, the resource would revive. This was to be expected: the ML model did not see this type of collapse dynamics, so it cannot invent it.
In summary, ML-based models:
👍 Very good for interpolation
👍 Does not require any knowledge from the system, only data!
👎 Cannot extrapolate to unobserved scenarios
Mechanistic modelling
We now define a new mechanistic model, deliberately falsifying the numerical values of the parameters compared to the baseline ecosystem model, and simplifying the resource dependence on water availability by assuming that its growth rate is constant. The resulting model_mechanistic is thus an inaccurate representation of the baseline ecosystem model.
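The construction of model_mechanistic is hidden in the post’s source files; a minimal sketch of what it could look like is given below. The falsified parameter values are purely illustrative, and the ModelParams settings (solver, time span, initial condition) are assumed to be the same as those used for the ground-truth model.

```julia
# Illustrative parameter values only; the author's exact values live in the hidden files
p_mech = ComponentArray(H₂₁ = Float32[2.0], H₃₂ = Float32[2.0],
                        q₂₁ = Float32[4.0], q₃₂ = Float32[1.0],
                        r = Float32[0.8, -0.3, -0.1],
                        K₁₁ = Float32[1.0], A₁₁ = Float32[1.0])

# The resource growth rate no longer depends on water availability
constant_growth_rate(p, t) = p.r

# Same constructor as the ground-truth model, with the growth-rate process swapped out;
# `tspan`, `u0_true`, `alg`, `reltol`, `abstol` and `tsteps` come from the hidden files
model_mechanistic = SimpleEcosystemModel(; mp = ModelParams(; p = p_mech, tspan, u0 = u0_true,
                                                            alg, reltol, abstol, saveat = tsteps),
                                         intinsic_growth_rate = constant_growth_rate,
                                         carrying_capacity, competition,
                                         resource_conversion_efficiency, feeding)
```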
The dynamics look quite different from the time series, and cannot provide any realistic forecast of the ecosystem state in the future. Yet the model does capture some of the dynamics, since it reproduces an oscillatory behavior. As such, we can assume that this ecosystem model captures some of the processes driving the baseline ecosystem dynamics.
We can now use this mechanistic model to again try to understand what happens if the resource species goes extinct.
That makes much more sense!!! The model does predict a collapse, and we can even estimate how long it will take for the system to fully collapse.
In summary, mechanistic models
👍 Very good for extrapolating to novel scenarios
👎 Hard to parametrise
👎 Inaccurate
Now let’s see how we can improve this ecosystem model, by making better use of the dataset that we have at hand.
Inverse modelling
Use the observed data to infer the parameters of a model
There are two broad classes of methods to perform inverse modelling:
- Bayesian inference:
  - provides uncertainty estimates,
  - does not scale well with the number of parameters to explore.
- Variational optimization:
  - does not suffer from the curse of dimensionality,
  - may converge to local minima,
  - requires parameter sensitivities (gradients).
To better understand the caveats of the variational optimization approach, let’s explain it further. This method consists in defining a loss function, which measures a distance between the model output and the observations:
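For instance, consistent with the log-scale loss used later in this post, one may define

$$
L(\theta) = \sum_{k} \left\lVert \log y_k - \log \hat{y}_k(\theta) \right\rVert^2,
$$

where $y_k$ are the observations, $\hat{y}_k(\theta)$ the corresponding simulation outputs, and $\theta$ the model parameters.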
Variational optimization methods seek to minimize $L$ by using its gradient (sensitivity) with respect to the model parameters. This gradient indicates how to update the parameters to reduce the distance between the observations and the simulation outputs.
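In its simplest form, this corresponds to gradient-descent updates

$$
\theta \leftarrow \theta - \eta \, \nabla_\theta L(\theta),
$$

where $\eta$ is a learning rate.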
This can be done iteratively until finding the parameters that minimize the loss, as illustrated below.
This works very well! In theory.
In practice, the landscape is rugged, as in the picture below
By following the gradient in such a landscape, variational optimization methods tend to get stuck in wrong regions of the parameter space, and provide false estimates.
PiecewiseInference.jl
PiecewiseInference.jl is a Julia package that I have authored, which implements a method to smooth the loss landscape and automatically obtain the model parameter sensitivities. It is detailed in the following preprint:
Boussange, V., Vilimelis-Aceituno, P., Pellissier, L., Mini-batching ecological data to improve ecosystem models with machine learning. bioRxiv (2022), 46 pages.
⚠️ We will shortly rename this preprint in a revised version, as the term “mini batching” is confusing. We now prefer the term “partitioning”
The method works by training the model on small chunks of data
Let’s use it to tune the parameters of our mechanistic model. We’ll group data points in groups of 11, indicated by the argument group_size = 11.
```julia
using PiecewiseInference
include("utils.jl") # some utility functions defining `inference_problem_args` and `piecewise_MLE_args`

loss_likelihood(data, pred, rg) = sum((log.(data) .- log.(pred)).^2)

infprob = InferenceProblem(model_mechanistic, p_mech; loss_likelihood, inference_problem_args...)

@time res_mech = piecewise_MLE(infprob;
                               group_size = 11,
                               data = data,
                               tsteps = tsteps,
                               optimizers = [Adam(1e-2)],
                               epochs = [500],
                               piecewise_MLE_args...)
```
piecewise_MLE with 101 points and 10 groups.
Current loss after 50 iterations: 14.972400665283203
Current loss after 100 iterations: 12.230918884277344
Current loss after 150 iterations: 11.497014045715332
Current loss after 200 iterations: 11.264657020568848
Current loss after 250 iterations: 11.19422721862793
Current loss after 300 iterations: 11.16870403289795
Current loss after 350 iterations: 11.156006813049316
Current loss after 400 iterations: 11.14731216430664
Current loss after 450 iterations: 11.141851425170898
Current loss after 500 iterations: 11.138410568237305
166.285066 seconds (1.17 G allocations: 177.010 GiB, 10.44% gc time, 27.02%
compilation time: 0% of which was recompilation)
`InferenceResult` with model SimpleEcosystemModel
Now let’s plot what this trained model predicts.
Looks much better! The model now captures the period-doubling oscillations.
In summary, mechanistic models
👍 Very good for extrapolating to novel scenarios
👎 Hard to parametrise
👎 Inaccurate
Model selection
To improve the accuracy, one can try to formulate different models corresponding to alternative hypotheses about the processes driving the ecosystem dynamics. For instance, we could compare the performance of this model to an alternative model, which would capture some sort of dependence between the water availability and the resource growth rate.
If we find that the refined model performs better than the model with constant growth rate, we have learnt that water availability is an important driver that affects the ecosystem dynamics.
Hybrid models
Assume that we have no idea what the dependence of the resource growth rate on water availability may look like.
To proceed, we can define a very generic parametric function that can capture any sort of dependence, and then try to learn the parameters of this function from the data.
Neural networks are parametric functions that are highly suited for this sort of task. So we’ll build a hybrid model, which contains the mechanistic components of the previous model, but where the resource growth rate is parametrized by a neural network.
🤖 + 🔬= 🤯
Below, we define the neural network, and the resource growth rate based on this neural net.
```julia
using Lux
using Random # for default_rng

rng = Random.default_rng()

# Multilayer FeedForward
hlsize = 5
neural_net = Lux.Chain(Lux.Dense(1, hlsize, rbf),
                       Lux.Dense(hlsize, hlsize, rbf),
                       Lux.Dense(hlsize, hlsize, rbf),
                       Lux.Dense(hlsize, 1))

# Get the initial parameters and state variables of the model
p_nn, st = Lux.setup(rng, neural_net)
p_nn = p_nn |> ComponentArray

growth_rate_resource_nn(p_nn, water) = neural_net([water], p_nn, st)[1]

function hybrid_growth_rate(p, t)
    return [growth_rate_resource_nn(p.p_nn, water_availability(t)); p.r]
end
```
hybrid_growth_rate (generic function with 1 method)
Now we define our new hybrid model, which implements this hybrid_growth_rate function, and calibrate it as before (a sketch is given below).
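The corresponding code is hidden in the post’s source files; here is a minimal sketch of how the hybrid model could be assembled and calibrated. The parameter container and the ModelParams settings are assumptions; only the hybrid_growth_rate function above is the author’s code.

```julia
# The neural-network weights are stored alongside the mechanistic parameters,
# so that they are calibrated jointly by PiecewiseInference
p_hybrid = ComponentArray(H₂₁ = p_mech.H₂₁, H₃₂ = p_mech.H₃₂,
                          q₂₁ = p_mech.q₂₁, q₃₂ = p_mech.q₃₂,
                          K₁₁ = p_mech.K₁₁, A₁₁ = p_mech.A₁₁,
                          r = p_mech.r[2:3], # consumer and predator growth rates; the resource rate comes from the NN
                          p_nn = p_nn)       # neural-network weights

model_hybrid = SimpleEcosystemModel(; mp = ModelParams(; p = p_hybrid, tspan, u0 = u0_true,
                                                       alg, reltol, abstol, saveat = tsteps),
                                    intinsic_growth_rate = hybrid_growth_rate,
                                    carrying_capacity, competition,
                                    resource_conversion_efficiency, feeding)

infprob_hybrid = InferenceProblem(model_hybrid, p_hybrid; loss_likelihood, inference_problem_args...)

res_hybrid = piecewise_MLE(infprob_hybrid; group_size = 11, data = data, tsteps = tsteps,
                           optimizers = [Adam(1e-2)], epochs = [500], piecewise_MLE_args...)
```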
piecewise_MLE with 101 points and 10 groups.
Current loss after 50 iterations: 5.201906204223633
Current loss after 100 iterations: 3.6290767192840576
Current loss after 150 iterations: 3.519521713256836
Current loss after 200 iterations: 3.4955437183380127
Current loss after 250 iterations: 3.4787349700927734
Current loss after 300 iterations: 3.5343759059906006
Current loss after 350 iterations: 3.4698166847229004
Current loss after 400 iterations: 3.4563584327697754
Current loss after 450 iterations: 3.5980613231658936
Current loss after 500 iterations: 3.4506189823150635
`InferenceResult` with model SimpleEcosystemModel
The loss has significantly decreased. This means that this model better explains the dynamics, and as such, we have discovered that water availability is indeed an important driver of ecosystem dynamics! Let’s plot the simulation output of this hybrid model.
The cool thing is that although it contains a fully parametric component (the neural network), this hybrid model can still extrapolate, because it is constrained by mechanistic processes.
And what’s even cooler is that by interpreting the neural network, we can actually learn a new process: the shape of the dependence between the resource growth rate and the water availability!
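A minimal sketch of how this can be done, assuming the calibrated hybrid model is stored in res_hybrid as in the sketch above (the plotting range for water availability is a guess):

```julia
p_nn_trained = res_hybrid.p_trained.p_nn   # calibrated neural-network weights

water_range = range(-1, 1, length = 100)   # assumed range of water availability
learned_growth = [growth_rate_resource_nn(p_nn_trained, w) for w in water_range]

fig, ax = subplots()
ax.plot(water_range, learned_growth)
ax.set_xlabel("Water availability")
ax.set_ylabel("Resource growth rate learned by the neural network")
display(fig)
```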
Mechanistic ecosystem models make it possible to quantitatively describe how populations, species, or communities grow, interact, and evolve. Yet calibrating them to fit real-world data is a daunting task. That’s why I’m excited to introduce PiecewiseInference.jl, a new Julia package that provides a user-friendly and efficient framework for inverse ecosystem modelling. In this blog post, I will guide you through the main features of PiecewiseInference.jl and provide a step-by-step tutorial on how to use it with a three-compartment ecosystem model. Whether you’re a quantitative ecologist or a curious data scientist, I hope this post will encourage you to join the effort and use and develop inverse ecosystem modelling methods to improve our understanding and predictions of ecosystems.
Preliminary steps
This tutorial relies on three packages that I have authored but that are not (yet) registered in the official Julia registry. Those are:
PiecewiseInference,
EcoEvoModelZoo: a package which provides access to a collection of ecosystem models,
ParametricModels: a wrapper package to manipulate dynamical models. Specifically, ParametricModels spares you the hassle of specifying, each time you want to simulate an ODE model, boring details such as the algorithm to solve it, the time span, etc.
To easily install them on your machine, you’ll have to add my personal registry by doing the following:
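A minimal sketch of how this can be done from the Julia REPL; the registry URL below is a placeholder, use the one given in the repositories’ READMEs:

```julia
using Pkg

# Placeholder URL: replace it with the address of the author's registry
Pkg.Registry.add(Pkg.RegistrySpec(url = "https://github.com/<username>/<Registry>.git"))

Pkg.add(["PiecewiseInference", "EcoEvoModelZoo", "ParametricModels"])
```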
We use Graphs to create a directed graph representing the food web to be considered. The OrdinaryDiffEq package provides tools for solving ordinary differential equations, while the LinearAlgebra package is used for linear algebraic computations. The UnPack package provides a convenient way to extract fields from structures, and the ComponentArrays package is used to store and manipulate the model parameters conveniently. Finally, the PythonCall package is used to interface with Python’s Matplotlib library for visualization.
Definition of the forward model
Defining hyperparameters for the forward simulation of the model.
Next, we define the algorithm used for solving the ODE model. We also define the
absolute tolerance (abstol) and relative tolerance (reltol) for the solver. tspan is a tuple representing the time range we will simulate the system for,
and tsteps is a vector representing the times we want to output the simulated
data.
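The corresponding code is not shown here; a minimal sketch, with illustrative values only (the author’s exact settings are defined in the hidden code), could look like:

```julia
using OrdinaryDiffEq

alg = Tsit5()                               # ODE solver
abstol = 1e-6                               # absolute tolerance
reltol = 1e-6                               # relative tolerance
tspan = (0.0, 600.0)                        # time range of the simulation (illustrative)
tsteps = range(550.0, 600.0, length = 100)  # times at which the simulated data is saved (illustrative)
```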
We’ll define a 3-compartment ecosystem as presented in McCann et al. (1994). We will use SimpleEcosystemModel from EcoEvoModelZoo.jl, which requires a food web structure as input. Let’s use a DiGraph to represent it.
```julia
N = 3 # number of compartments
foodweb = DiGraph(N)
add_edge!(foodweb, 2 => 1) # C to R
add_edge!(foodweb, 3 => 2) # P to C
```
true
The N variable specifies the number of
compartments in the model. The add_edge! function is used to add edges to the
graph, specifying the flow of resources between compartments.
For fun, let’s just plot the foodweb. Here we use the PythonCall and PythonPlot
packages to visualize the food web as a directed graph using networkx and numpy.
We create a color list for the different species, and then create a directed
graph g_nx with networkx using the adjacency matrix of the food web. We also
specify the position of each node in the graph, and use nx.draw to draw the graph with these positions and colors.
Now that we have defined the foodweb structure, we can build the ecosystem
model, which will be a SimpleEcosystemModel from EcoEvoModelZoo.
The next several functions are required by SimpleEcosystemModel and define the
specific dynamics of the model. The intinsic_growth_rate function specifies
the intrinsic growth rate of each compartment, while the carrying_capacity
function specifies the carrying capacity of each compartment. The competition
function specifies the competition between and within compartments, while the resource_conversion_efficiency function specifies the efficiency with which
resources are converted into consumer biomass. The feeding function specifies
the feeding interactions between compartments.
resource_conversion_efficiency (generic function with 1 method)
To define the feeding processes, we use adjacency_matrix to get the adjacency matrix of the food web. We then use findnz from SparseArrays to get the row and column indices of the non-zero entries in the adjacency matrix, which we store in I and J. Those are then used to generate sparse matrices required for defining the functional responses of each species considered. The sparse matrices’ non-zero coefficients are the model parameters to be fitted.
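For instance, the indices of the feeding links can be obtained as follows; the sparse matrices of handling times and attack rates are then built from the parameters defined below.

```julia
using Graphs, SparseArrays

W = adjacency_matrix(foodweb)   # 3×3 sparse matrix with a 1 for each feeding link
I, J, _ = findnz(W)             # I = [2, 3], J = [1, 2]: consumer→resource, predator→consumer

# e.g., the handling times H₂₁ and H₃₂ end up stored at positions (2, 1) and (3, 2)
# of a sparse matrix built with sparse(I, J, values, N, N)
```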
Defining the ecosystem model parameters for generating a dataset
The parameters for the ecosystem model are defined using a ComponentArray. The u0_true variable specifies the initial conditions for the simulation. The ModelParams type from the ParametricModels package is used to specify the
model parameters and simulation settings. Finally, the SimpleEcosystemModel
type from the EcoEvoModelZoo package is used to define the ecosystem model.
```julia
p_true = ComponentArray(H₂₁ = [1.24], H₃₂ = [2.5], q₂₁ = [4.98], q₃₂ = [0.8],
                        r = [1.0, -0.4, -0.08], K₁₁ = [1.0], A₁₁ = [1.0])

u0_true = [0.77, 0.060, 0.945]

mp = ModelParams(; p = p_true,
                 tspan,
                 u0 = u0_true,
                 alg,
                 reltol,
                 abstol,
                 saveat = tsteps,
                 verbose = false, # suppresses warnings for maxiters
                 maxiters = 50_000)

model = SimpleEcosystemModel(; mp, intinsic_growth_rate, carrying_capacity, competition,
                             resource_conversion_efficiency, feeding)
```
`Model` SimpleEcosystemModel
Let’s run the model to generate a dataset! There is nothing simpler than that. Let’s also plot it, to get a sense of what it looks like.
Let’s add a bit of noise to the data to simulate experimental errors. We proceed by adding log-normally distributed noise, so that abundances are always positive (negative abundances would not make sense, but could happen if we added normally distributed noise!).
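A minimal sketch, mirroring the noise model used earlier in this document (the exact code is not shown here):

```julia
data = simulate(model) |> Array                # run the forward model
data = data .* exp.(0.1 .* randn(size(data)))  # multiplicative log-normal noise
```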
Now that we have set up our model and generated some data, we can proceed with the inverse modelling using PiecewiseInference.jl.
PiecewiseInference.jl performs the inversion based on a segmentation method that partitions the data into short time series (segments), each treated independently and matched against simulations of the model considered. The segmentation approach helps to avoid the ill-behaved loss functions that arise from the strong nonlinearities of ecosystem models when formulating the inference problem. Note that during the inversion, not only the parameters are inferred, but also the initial conditions of each segment, which are necessary to simulate the ODE model.
Definition of the InferenceProblem
We first import the packages required for the inversion. PiecewiseInference is the
main package used, but we also need OptimizationFlux for the Adam optimizer,
and SciMLSensitivity to define the sensitivity method used to differentiate
the forward model.
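That is:

```julia
using PiecewiseInference
using OptimizationFlux     # provides the Adam optimizer
using SciMLSensitivity     # sensitivity methods to differentiate the forward model
```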
To initialize the inversion, we set the initial values for the parameters in p_init to those of p_true but modify the H₂₁ parameter.
```julia
p_init = p_true
p_init.H₂₁ .= 2.0
```
1-element view(::Vector{Float64}, 1:1) with eltype Float64:
2.0
Next, we define a loss function loss_likelihood that compares the observed data
with the predicted data. Here, we use a simple mean-squared error loss function on log-transformed abundances, since the noise is log-normally distributed.
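A sketch of these two definitions, mirroring the loss used earlier in this document (the hidden code may differ slightly):

```julia
# Mean-squared error on log-transformed abundances
loss_likelihood(data, pred, rg) = sum((log.(data) .- log.(pred)).^2)

# The inference problem bundles the model, the initial parameter values, and the loss
infprob = InferenceProblem(model, p_init; loss_likelihood)
```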
It is also handy to use a callback function, which will be called after each iteration of the optimization routine, to visualize the progress of the inference. Here, we use it to track the loss value and plot the data against the model predictions.
```julia
info_per_its = 50
include("cb.jl") # defines the `plotting_fit` function

function callback(p_trained, losses, pred, ranges)
    if length(losses) % info_per_its == 0
        plotting_fit(losses, pred, ranges, data, tsteps)
    end
end
```
callback (generic function with 1 method)
piecewise_MLE hyperparameters
To use piecewise_MLE, the main function of PiecewiseInference for estimating the parameters that fit the observed data, we need to decide on two critical hyperparameters:
- group_size: the number of data points that define an interval, or segment. This number is usually small, but should be chosen according to the dynamics of the model: the more nonlinear the model, the lower group_size should be. We set it here to 11.
- batch_size: the number of intervals, or segments, to consider in a single epoch. The higher the batch_size, the more computationally expensive a single iteration of piecewise_MLE, but the faster the convergence. Here we set it to 5, but we could increase it to 10, which is the total number of segments that we have.
Another critical parameter to be decided upon is the automatic differentiation backend used to differentiate the ODE model. Two are supported: Optimization.AutoForwardDiff() and Optimization.AutoZygote(). Simply put, Optimization.AutoForwardDiff() is used for forward-mode sensitivity analysis, while Optimization.AutoZygote() is used for backward-mode sensitivity analysis. For more information on those, please refer to the documentation of Optimization.jl.
Other parameters required by piecewise_MLE are the following (a sketch of the full call is given after this list):
- optimizers specifies the optimization algorithm to be used for each batch. We use the Adam optimizer, which is the go-to optimizer for training deep learning models. It has a learning rate parameter that controls the step size at each iteration. We have chosen a value of 1e-2 because it provides good convergence without causing numerical instability,
- epochs specifies the number of epochs to be used for each batch. We chose a value of 500 because it is sufficient to achieve good convergence,
- info_per_its specifies after how many iterations the callback function should be called,
- verbose_loss prints the value of the loss function during training.
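Putting everything together, the call could look like the sketch below; the keyword names mirror the discussion above and the earlier example, but the exact signature may differ (check the PiecewiseInference.jl documentation):

```julia
@time res = piecewise_MLE(infprob;
                          adtype = Optimization.AutoForwardDiff(), # AD backend (assumed keyword)
                          group_size = 11,
                          batchsizes = [5],          # referred to as `batch_size` above (assumed keyword)
                          data = data,
                          tsteps = tsteps,
                          optimizers = [Adam(1e-2)],
                          epochs = [500],
                          verbose_loss = true,
                          info_per_its = info_per_its,
                          cb = callback)             # assumed keyword for the callback
```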
piecewise_MLE with 100 points and 10 groups.
Current loss after 50 iterations: 36.745952018526495
Current loss after 100 iterations: 15.064711239454626
Current loss after 150 iterations: 9.993029255013324
Current loss after 200 iterations: 7.994491307947515
Current loss after 250 iterations: 6.500818892986831
Current loss after 300 iterations: 5.3892647156988565
Current loss after 350 iterations: 3.0351181646280514
Current loss after 400 iterations: 2.674445730720996
Current loss after 450 iterations: 3.1591980829795676
Current loss after 500 iterations: 2.4343376293865995
157.765049 seconds (1.70 G allocations: 154.827 GiB, 10.16% gc time, 31.31%
compilation time: 1% of which was recompilation)
`InferenceResult` with model SimpleEcosystemModel
Finally, we can examine the results of the inversion. We can look at the final parameters, and the initial conditions inferred for each segment:
```julia
# Some more code
p_trained = res.p_trained
u0s_trained = res.u0s_trained

function print_param_values(p_trained, p_true)
    for k in keys(p_trained)
        println(string(k))
        println("trained value = "); display(p_trained[k])
        println("true value ="); display(p_true[k])
    end
end

print_param_values(p_trained, p_true)
```
H₂₁
trained value =
1-element Vector{Float64}:
1.4786844814887716
true value =
1-element Vector{Float64}:
2.0
H₃₂
trained value =
1-element Vector{Float64}:
1.891238277791975
true value =
1-element Vector{Float64}:
2.5
q₂₁
trained value =
1-element Vector{Float64}:
4.550896291686214
true value =
1-element Vector{Float64}:
4.98
q₃₂
trained value =
1-element Vector{Float64}:
0.7250599871665505
true value =
1-element Vector{Float64}:
0.8
r
trained value =
3-element Vector{Float64}:
0.8705446490535288
-0.30124597815843823
-0.08241879418666838
true value =
3-element Vector{Float64}:
1.0
-0.4
-0.08
K₁₁
trained value =
1-element Vector{Float64}:
1.0397189294700315
true value =
1-element Vector{Float64}:
1.0
A₁₁
trained value =
1-element Vector{Float64}:
0.972947353012839
true value =
1-element Vector{Float64}:
1.0
Your turn to play!
You can try to change e.g. batch_size and group_size. How do these parameters influence the quality of the inversion?
Conclusion
PiecewiseInference.jl provides an efficient and flexible way to perform inference on complex ecological models, making use of automatic differentiation and optimizers traditionally used in Machine Learning. The segmentation method implemented in PiecewiseInference.jl regularizes the inference problem and enables inverse modelling of complex dynamical systems, for which standard methods would otherwise fail.
Furthermore, PiecewiseInference.jl together with EcoEvoModelZoo.jl offer a powerful toolkit for ecologists and evolutionary biologists to benchmark and validate models against data. The combination of theoretical modelling and data can provide new insights into complex ecological systems, helping us to better understand and predict the dynamics of biodiversity.
We invite users to explore these packages and contribute to their development, by adding new models to the EcoEvoModelZoo.jl and improve the features of PiecewiseInference.jl. With these tools, we can continue to push the boundaries of ecological modelling and make important strides towards a more sustainable future.
Appendix
You can find the corresponding tutorial as a .jmd file at https://github.com/vboussange/MyTutorials.
Please contact me, if you have found a mistake, or if you have any comment or suggestion on how to improve this tutorial.