By: Logan Kilpatrick
The IEEE just released their annual report on the top programming language which can be found here…
By: Bradley Setzler
Re-posted from: http://juliaeconomics.com/2016/02/09/an-introduction-to-structural-econometrics-in-julia/
This tutorial is adapted from my Julia introductory lecture taught in the graduate course Practical Computing for Economists, Department of Economics, University of Chicago.
The tutorial is in 5 parts:
Perhaps the greatest obstacle to using Julia in the past has been the absence of an easy-to-install IDE. There used to be an IDE called Julia Studio which was as easy to use as the popular RStudio for R. Back then, you could install and run Julia + Julia Studio in 5mins, compared to the hours it could take to install Python and its basic packages and IDE. When Julia version 0.3.X was released, Julia Studio no longer worked, and I recommended the IJulia Notebook, which requires the installation of Python and IPython just to use Julia, so any argument that Julia is more convenient to install than Python was lost.
Now, with Julia version 0.4.X, Juno has provided an excellent IDE that comes pre-bundled with Julia for convenience, and you can install Julia + Juno IDE in 5mins. Here are some instructions to help you through the installation process:
Pkg.update()
Pkg.add("Ipopt")
Pkg.build("Ipopt")
Pkg.add("JuMP")
Pkg.build("JuMP")
Pkg.add("GLM")
Pkg.add("KernelEstimator")
To motivate our application, we consider a very simple economic model, which I have taught previously in the mathematical economics course for undergraduates at the University of Chicago. Although the model is analytically simple, the econometrics become sufficiently complicated to warrant the Method of Simulated Moments, so this serves us well as a teachable case.
Let $c$ denote consumption and $l$ denote leisure. Consider an agent who wishes to maximize Cobb-Douglas utility over consumption and leisure, that is,

$$u(c, l) = \gamma \log(c) + (1 - \gamma)\log(l),$$

where $\gamma \in (0,1)$ is the relative preference for consumption. The budget constraint is given by,

$$c \leq (1 - \tau)w(1 - l) + \epsilon,$$

where $w$ is the wage observed in the data, $\epsilon$ is other income that is not observed in the data, and $\tau$ is the tax rate.
The agent’s problem is to maximize $u(c, l)$ subject to the budget constraint. We assume that non-labor income is uncorrelated with the wage offer, so that $\mathrm{Cov}(w, \epsilon) = 0$. Although this assumption is a bit unrealistic, as we expect high-wage agents to also tend to have higher non-labor income, it helps keep the example simple. The model is also a bit contrived in that we treat the tax rate as unobservable, but this only makes our job more difficult.
The goal of the econometrician is to identify the model parameters $\gamma$ and $\tau$ from the data $(c, l, w)$ and the assumed structure. In particular, the econometrician is interested in the policy-relevant parameter $\bar\psi = E[\psi(w)]$, where,

$$\psi(w) \equiv \frac{\partial E[c(w)]}{\partial \tau},$$

and $c(w)$ denotes the demand for consumption. $\psi(w)$ is the marginal propensity for an agent with wage $w$ to consume in response to the tax rate. $\bar\psi$ is the population average marginal propensity to consume in response to the tax rate. Of course, we can solve the model analytically to find that $\psi(w) = -\gamma w$ and $\bar\psi = -\gamma \bar{w}$, where $\bar{w}$ is the average wage, but we will show that the numerical methods achieve the correct answer even when we cannot solve the model.
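Since the model has a closed form, the claimed demands and the derivative $\psi(w) = -\gamma w$ can be verified symbolically. The following is an illustrative check in Python with SymPy (not part of the original post); it confirms that the closed-form demands satisfy the budget constraint and the tangency condition, then differentiates consumption demand with respect to the tax rate:

```python
import sympy as sp

c, l, w, g, t, e = sp.symbols("c l w gamma tau epsilon", positive=True)

# Closed-form demands claimed in the text
c_star = g * ((1 - t) * w + e)
l_star = (1 - g) + (1 - g) * e / ((1 - t) * w)

# Budget constraint holds with equality at the optimum
budget_gap = sp.simplify((1 - t) * w * (1 - l_star) + e - c_star)

# Tangency condition: marginal utility per dollar equalized across c and l
tangency_gap = sp.simplify(g / c_star - (1 - g) / (l_star * (1 - t) * w))

# Policy parameter: derivative of consumption demand w.r.t. the tax rate
psi = sp.simplify(sp.diff(c_star, t))

print(budget_gap, tangency_gap, psi)  # 0 0 -gamma*w
```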
The replication code for this section is available here.
To generate data that follows the above model, we first solve analytically for the demand functions for consumption and leisure. In particular, they are,

$$c(w; \gamma, \tau, \epsilon) = \gamma(1 - \tau)w + \gamma\epsilon,$$

$$l(w; \gamma, \tau, \epsilon) = (1 - \gamma) + \frac{(1 - \gamma)\epsilon}{(1 - \tau)w}.$$

Thus, we need only draw values of $w$ and $\epsilon$, as well as choose parameter values for $\gamma$ and $\tau$, in order to generate the values of $c$ and $l$ that agents in this model would choose. We implement this in Julia as follows:
####### Set Simulation Parameters #########
srand(123)   # set the seed to ensure reproducibility
N = 1000     # set number of agents in economy
gamma = .5   # set Cobb-Douglas relative preference for consumption
tau = .2     # set tax rate

####### Draw Income Data and Optimal Consumption and Leisure #########
epsilon = randn(N)   # draw unobserved non-labor income
wage = 10+randn(N)   # draw observed wage
consump = gamma*(1-tau)*wage + gamma*epsilon   # Cobb-Douglas demand for c
leisure = (1.0-gamma) + ((1.0-gamma)*epsilon)./((1.0-tau)*wage)   # Cobb-Douglas demand for l
This code is relatively self-explanatory. Our parameter choices are $N = 1000$, $\gamma = 0.5$, $\tau = 0.2$, and $\epsilon \sim N(0, 1)$. We draw the wage to have distribution $N(10, 1)$, but this is arbitrary.
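As a sanity check on the closed-form demands used above, here is a small illustrative sketch in Python (not from the original post): for one hypothetical agent it maximizes Cobb-Douglas utility along the budget line by brute-force grid search and confirms that the grid maximizer agrees with the leisure-demand formula:

```python
import numpy as np

# One hypothetical agent (illustrative values)
gamma, tau, w, eps = 0.5, 0.2, 10.0, 0.3

# Closed-form demands from the text
c_star = gamma * (1 - tau) * w + gamma * eps
l_star = (1 - gamma) + (1 - gamma) * eps / ((1 - tau) * w)

# Brute force: evaluate utility along the budget line over a fine grid of leisure
l_grid = np.linspace(1e-6, 1 - 1e-6, 200_001)
c_grid = (1 - tau) * (1 - l_grid) * w + eps        # budget constraint
u_grid = gamma * np.log(c_grid) + (1 - gamma) * np.log(l_grid)
l_best = l_grid[np.argmax(u_grid)]

print(l_star, l_best)  # the grid maximizer agrees with the formula
```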
We combine the variables into a DataFrame, and export the data as a CSV file. In order to better understand the data, we also non-parametrically regress $c$ on $w$, and plot the result with Gadfly. The Julia code is as follows:
####### Organize, Describe, and Export Data #########
using DataFrames
using Gadfly
df = DataFrame(consump=consump,leisure=leisure,wage=wage,epsilon=epsilon)  # create data frame
plot_c = plot(df,x=:wage,y=:consump,Geom.smooth(method=:loess))  # plot E[consump|wage] using Gadfly
draw(SVG("plot_c.svg", 4inch, 4inch), plot_c)  # export plot as SVG
writetable("consump_leisure.csv",df)  # export data as CSV
Again, the code is self-explanatory. The regression graph produced by the plot function is:
The replication code for this section is available here.
We now use constrained numerical optimization to generate optimal consumption and leisure data without analytically solving for the demand function. We begin by importing the data and the necessary packages:
####### Prepare for Numerical Optimization #########
using DataFrames
using JuMP
using Ipopt
df = readtable("consump_leisure.csv")
N = size(df)[1]
Using the JuMP syntax for non-linear modeling, first we define an empty model associated with the Ipopt solver, and then add $N$ values of consumption $c$ and $N$ values of leisure $l$ to the model:
m = Model(solver=IpoptSolver())  # define empty model solved by Ipopt algorithm
@defVar(m, c[i=1:N] >= 0)        # define positive consumption for each agent
@defVar(m, 0 <= l[i=1:N] <= 1)   # define leisure in [0,1] for each agent
This syntax is especially convenient, as it allows us to define vectors of parameters, each satisfying the natural inequality constraints. Next, we define the budget constraint, which also follows this convenient syntax:
@addConstraint(m, c[i=1:N] .== (1.0-t)*(1.0-l[i]).*w[i] + e[i] ) # each agent must satisfy the budget constraint
Finally, we define a scalar-valued objective function, which is the sum of each individual’s utility:
@setNLObjective(m, Max, sum{ g*log(c[i]) + (1-g)*log(l[i]) , i=1:N } ) # maximize the sum of utility across all agents
Notice that we can optimize one objective function instead of optimizing $N$ separate objective functions because the individual constrained maximization problems are independent across individuals, so the maximum of the sum is the sum of the maxima. Finally, we can apply the solver to this model and extract optimal consumption and leisure as follows:
status = solve(m)     # run numerical optimization
c_opt = getValue(c)   # extract demand for c
l_opt = getValue(l)   # extract demand for l
To make sure it worked, we compare the consumption extracted from this numerical approach to the consumption we generated previously using the true demand functions:
cor(c_opt,array(df[:consump]))
0.9999999998435865
Thus, the consumption values produced by the numerical optimizer’s approximation to the demand for consumption are almost identical to those produced by the true demand function. Putting it all together, we create a function that can solve for optimal consumption and leisure given any particular values of $\gamma$, $\tau$, $w$, and $\epsilon$:
function hh_constrained_opt(g,t,w,e)
    m = Model(solver=IpoptSolver())  # define empty model solved by Ipopt algorithm
    @defVar(m, c[i=1:N] >= 0)        # define positive consumption for each agent
    @defVar(m, 0 <= l[i=1:N] <= 1)   # define leisure in [0,1] for each agent
    @addConstraint(m, c[i=1:N] .== (1.0-t)*(1.0-l[i]).*w[i] + e[i] )  # each agent must satisfy the budget constraint
    @setNLObjective(m, Max, sum{ g*log(c[i]) + (1-g)*log(l[i]) , i=1:N } )  # maximize the sum of utility across all agents
    status = solve(m)    # run numerical optimization
    c_opt = getValue(c)  # extract demand for c
    l_opt = getValue(l)  # extract demand for l
    demand = DataFrame(c_opt=c_opt,l_opt=l_opt)  # return demand as DataFrame
end

hh_constrained_opt(gamma,tau,array(df[:wage]),array(df[:epsilon]))  # verify that it works at the true values of gamma, tau, and epsilon
The replication code for this section is available here.
We saw in the previous section that, for a given set of model parameters $\gamma$ and $\tau$ and a given draw of $\epsilon_i$ for each $i$, we have enough information to simulate $c_i$ and $l_i$ for each $i$. Denote these simulated values by $\hat{c}_i(\gamma, \tau, \epsilon)$ and $\hat{l}_i(\gamma, \tau, \epsilon)$. With these, we can define the moments,

$$m(\gamma, \tau) = E\left[\begin{array}{c} \hat{c}_i(\gamma, \tau, \epsilon) - c_i \\ \hat{l}_i(\gamma, \tau, \epsilon) - l_i \end{array}\right],$$

which is equal to zero under the model assumptions. A method of simulated moments (MSM) approach to estimate $\gamma$ and $\tau$ is then,

$$(\hat\gamma, \hat\tau) = \arg\min_{\gamma, \tau} \; m(\gamma, \tau)' W m(\gamma, \tau),$$

where $W$ is a weighting matrix, which is only relevant when the number of moments is greater than the number of parameters, which is not true in our case, so $W$ can be ignored and the method of simulated moments simplifies to,

$$(\hat\gamma, \hat\tau) = \arg\min_{\gamma, \tau} \; m_c(\gamma, \tau)^2 + m_l(\gamma, \tau)^2,$$

where $m_c$ and $m_l$ denote the consumption and leisure components of $m(\gamma, \tau)$. Assuming we know the distribution of $\epsilon$, we can simply draw many values of $\epsilon_i$ for each $i$, and average the moments together across all of the draws of $\epsilon$. This is Monte Carlo numerical integration. In Julia, we can create this objective function with a random draw of $\epsilon$ as follows:
function sim_moments(params)
    this_epsilon = randn(N)  # draw random epsilon
    ggamma,ttau = params     # extract gamma and tau from vector
    this_demand = hh_constrained_opt(ggamma,ttau,array(df[:wage]),this_epsilon)  # obtain demand for c and l
    c_moment = mean( this_demand[:c_opt] ) - mean( df[:consump] )  # compute empirical moment for c
    l_moment = mean( this_demand[:l_opt] ) - mean( df[:leisure] )  # compute empirical moment for l
    [c_moment,l_moment]      # return vector of moments
end
In order to estimate $m(\gamma, \tau)$, we need to run sim_moments(params) many times and take the unweighted average across them to achieve the expectation across $\epsilon$. Because each calculation is computer-intensive, it makes sense to compute the contribution of each draw of $\epsilon$ on a different processor and then average across them.
Previously, I presented a convenient approach for parallelization in Julia. The idea is to initialize processors with the addprocs() function in an “outer” script, then import all of the needed data and functions to all of the different processors with the require() function applied to an “inner” script, where the needed data and functions are already managed by the inner script. This is incredibly easy and much simpler than the manual spawn-and-fetch approaches suggested by Julia’s official documentation.
In order to implement the parallelized method of simulated moments, the functions hh_constrained_opt() and sim_moments() are stored in a file called est_msm_inner.jl. The following code defines the parallelized MSM and then minimizes the MSM objective using the optimize command set to use the Nelder-Mead algorithm from the Optim package:
####### Prepare for Parallelization #########
addprocs(3)  # Adds 3 processors in parallel (the first is added by default)
print(nprocs())  # Now there are 4 active processors
require("est_msm_inner.jl")  # This distributes functions and data to all active processors

####### Define Sum of Squared Residuals in Parallel #########
function parallel_moments(params)
    params = exp(params)./(1.0+exp(params))  # rescale parameters to be in [0,1]
    results = @parallel (hcat) for i=1:numReps
        sim_moments(params)
    end
    avg_c_moment = mean(results[1,:])
    avg_l_moment = mean(results[2,:])
    SSR = avg_c_moment^2 + avg_l_moment^2
end

####### Minimize Sum of Squared Residuals in Parallel #########
using Optim
function MSM()
    out = optimize(parallel_moments,[0.,0.],method=:nelder_mead,ftol=1e-8)
    println(out)  # verify convergence
    exp(out.minimum)./(1.0+exp(out.minimum))  # return results in rescaled units
end
Parallelization is performed by the @parallel macro, and the results are horizontally concatenated from the various processors by the hcat command. The key tuning parameter here is numReps, which is the number of draws of $\epsilon$ to use in the Monte Carlo numerical integration. Because this example is so simple, a small number of repetitions is sufficient, while a larger number would be needed if $\epsilon$ entered the model in a more complicated manner. The process is run as follows and requires 268 seconds to run on my Macbook Air:
numReps = 12  # set number of times to simulate epsilon
gamma_MSM, tau_MSM = MSM()  # Perform MSM

gamma_MSM
0.49994494921381816
tau_MSM
0.19992279518894465
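For readers without a Julia 0.4 setup, the same MSM logic can be sketched in Python using the closed-form demands in place of the constrained optimizer. The logistic rescaling mirrors the post, but everything else here is an illustrative reconstruction under the same parameter values, not the original code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(123)
N = 1000

# "Observed" data generated at the true parameters gamma = 0.5, tau = 0.2
gamma0, tau0 = 0.5, 0.2
wage = 10 + rng.standard_normal(N)
eps_data = rng.standard_normal(N)
consump = gamma0 * (1 - tau0) * wage + gamma0 * eps_data
leisure = (1 - gamma0) + (1 - gamma0) * eps_data / ((1 - tau0) * wage)

# Fixed simulation draws: common random numbers keep the objective smooth
n_reps = 12
eps_sim = rng.standard_normal((n_reps, N))

def ssr(params):
    g, t = 1 / (1 + np.exp(-np.asarray(params)))   # rescale to (0,1), as in the post
    c_sim = g * (1 - t) * wage + g * eps_sim       # closed-form demand for c
    l_sim = (1 - g) + (1 - g) * eps_sim / ((1 - t) * wage)  # demand for l
    m_c = c_sim.mean() - consump.mean()            # consumption moment
    m_l = l_sim.mean() - leisure.mean()            # leisure moment
    return m_c ** 2 + m_l ** 2                     # sum of squared moments

out = minimize(ssr, x0=[0.0, 0.0], method="Nelder-Mead")
g_hat, t_hat = 1 / (1 + np.exp(-out.x))
print(g_hat, t_hat)  # close to 0.5 and 0.2
```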
Finally, given the MSM estimates $\hat\gamma$ and $\hat\tau$ of $\gamma$ and $\tau$, we define the numerical derivative,

$$\hat{\bar\psi} = \frac{E[c(w; \hat\gamma, \hat\tau + h, \epsilon)] - E[c(w; \hat\gamma, \hat\tau - h, \epsilon)]}{2h},$$

for some small $h > 0$, as follows:
function Dconsump_Dtau(g,t,h)
    opt_plus_h = hh_constrained_opt(g,t+h,array(df[:wage]),array(df[:epsilon]))
    opt_minus_h = hh_constrained_opt(g,t-h,array(df[:wage]),array(df[:epsilon]))
    (mean(opt_plus_h[:c_opt]) - mean(opt_minus_h[:c_opt]))/(2*h)
end

barpsi_MSM = Dconsump_Dtau(gamma_MSM,tau_MSM,.1)
-5.016610457903023
Thus, we estimate the policy parameter to be approximately $\hat{\bar\psi} \approx -5.017$ on average, while the true value is $\bar\psi = -\gamma\bar{w} = -5$, so the econometrician’s problem is successfully solved.
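Because consumption demand is linear in the tax rate, a central difference recovers $-\gamma\bar{w}$ exactly. The following Python sketch (illustrative, using the closed-form demand rather than the numerical optimizer) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
gamma, tau, h = 0.5, 0.2, 0.1
wage = 10 + rng.standard_normal(N)
eps = rng.standard_normal(N)

def mean_consump(t):
    # Closed-form demand for consumption, averaged over agents
    return np.mean(gamma * (1 - t) * wage + gamma * eps)

# Central-difference numerical derivative with respect to the tax rate
psi_bar = (mean_consump(tau + h) - mean_consump(tau - h)) / (2 * h)
print(psi_bar)  # equals -gamma * mean(wage), roughly -5
```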
Bradley Setzler
By: Bradley Setzler
Re-posted from: https://juliaeconomics.com/2014/09/02/selection-bias-corrections-in-julia-part-1/
Selection bias arises when a data sample is not a random draw from the population that it is intended to represent. This is especially problematic when the probability that a particular individual appears in the sample depends on variables that also affect the relationships we wish to study. Selection bias corrections based on models of economic behavior were pioneered by the economist James J. Heckman in his seminal 1979 paper.
For an example of selection bias, suppose we wish to study the effectiveness of a treatment (a new medicine for patients with a particular disease, a preschool curriculum for children facing particular disadvantages, etc.). A random sample is drawn from the population of interest, and the treatment is randomly assigned to a subset of this sample, with the remaining subset serving as the untreated (“control”) group. If the subsets followed instructions, then the resulting data would serve as a random draw from the data generating process that we wish to study.
However, suppose the treatment and control groups do not comply with their assignments. In particular, if only some of the treated benefit from treatment while the others are in fact harmed by treatment, then we might expect the harmed individuals to leave the study. If we accepted the resulting data as a random draw from the data generating process, it would appear that the treatment was much more successful than it actually was; an individual who benefits is more likely to be present in the data than one who does not benefit.
Conversely, if treatment were very beneficial, then some individuals in the control group may find a way to obtain treatment, possibly without our knowledge. The benefits received by the control group would make it appear that the treatment was less beneficial than it actually was; the receipt of treatment is no longer random.
In this tutorial, I present some parameterized examples of selection bias. Then, I present examples of parametric selection bias corrections, evaluating their effectiveness in recovering the data generating processes. Along the way, I demonstrate the use of the GLM package in Julia. A future tutorial demonstrates non-parametric selection bias corrections.
Example 1: Selection on a Normally-Distributed Unobservable
Suppose that we wish to study the effect of the observable variable $X$ on $Y$. The data generating process is given by,

$$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i,$$

where $X$ and $\epsilon$ are independent in the population. Because of this independence condition, the ordinary least squares (OLS) estimator would be unbiased if the data were drawn randomly from the population. However, suppose that the probability that individual $i$ is in the data set is a function of $X_i$ and $\epsilon_i$. For example, suppose that,

$$\Pr[i \in S] = 1 \text{ if } X_i > \epsilon_i,$$

and the probability is zero otherwise, where $S$ denotes the observed sample. This selection rule ensures that, among the individuals in the data (the $i$ in $S$), the covariance between $X$ and $\epsilon$ will be positive, even though the covariance is zero in the population. When $X$ covaries positively with $\epsilon$, the OLS estimate of $\beta_1$ is biased upward, i.e., the OLS estimator converges to a value that is greater than $\beta_1$.
To see the problem, consider the following simulation of the above process, in which $X$ and $\epsilon$ are drawn as independent standard normal random variables:
using DataFrames

srand(2)
N = 1000
X = randn(N)
epsilon = randn(N)
beta_0 = 0.
beta_1 = 1.
Y = beta_0 + beta_1.*X + epsilon
populationData = DataFrame(Y=Y,X=X,epsilon=epsilon)
selected = X.>epsilon
sampleData = DataFrame(Y=Y[selected],X=X[selected],epsilon=epsilon[selected])
There are 1,000 individuals in the population data, but 492 of them are selected to be included in the sample data. The covariance between X and epsilon in the population data is,
cov(populationData[:X],populationData[:epsilon])
0.00861456704877879
which is approximately zero, but in the sample data, it is,
cov(sampleData[:X],sampleData[:epsilon])
0.32121357192108513
which is approximately 0.32.
Now, we regress $Y$ on $X$ with the two data sets to obtain,
linreg(array(populationData[:X]),array(populationData[:Y]))
2-element Array{Float64,1}:
 -0.027204
  1.00882

linreg(array(sampleData[:X]),array(sampleData[:Y]))
2-element Array{Float64,1}:
 -0.858874
  1.4517
where, in Julia 0.3.0, the command array() replaces the old command matrix() in converting a DataFrame into a numerical Array. This simulation demonstrates severe selection bias associated with using the sample data to estimate the data generating process instead of the population data, as the true parameters, $\beta_0 = 0$ and $\beta_1 = 1$, are not recovered by the sample estimator.
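The same bias can be reproduced in a few lines of NumPy. This is an illustrative sketch, not the post's Julia code, and it uses a larger sample so the biased slope settles near its limiting value of roughly 1.47:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
x = rng.standard_normal(N)
eps = rng.standard_normal(N)
y = 0.0 + 1.0 * x + eps            # true beta_0 = 0, beta_1 = 1

def ols(x, y):
    # OLS with an intercept, returning [intercept, slope]
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

sel = x > eps                      # selection rule: keep i only if X_i > eps_i
b_pop = ols(x, y)
b_sel = ols(x[sel], y[sel])
print(b_pop)  # slope near 1
print(b_sel)  # intercept pushed below 0, slope pushed above 1
```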
Correction 1: Heckman (1979)
The key insight of Heckman (1979) is that the correlation between $X$ and $\epsilon$ can be represented as an omitted variable in the OLS moment condition,

$$E[Y_i | X_i, i \in S] = \beta_0 + \beta_1 X_i + E[\epsilon_i | X_i, i \in S].$$

Furthermore, using the conditional density of the standard normally distributed $\epsilon$ and the selection rule $\epsilon_i < X_i$,

$$E[\epsilon_i | X_i, i \in S] = E[\epsilon_i | \epsilon_i < X_i] = -\frac{\phi(X_i)}{\Phi(X_i)},$$

which is called the inverse Mills ratio, where $\phi$ and $\Phi$ are the probability density and cumulative distribution functions of the standard normal distribution. As a result, the moment condition that holds in the sample is,

$$E[Y_i | X_i, i \in S] = \beta_0 + \beta_1 X_i - \beta_2\frac{\phi(X_i)}{\Phi(X_i)},$$

where $\beta_2 = 1$ under the data generating process above.
Returning to our simulation, the inverse Mills ratio is added to the sample data as,
using Distributions
sampleData[:invMills] = -pdf(Normal(),sampleData[:X])./cdf(Normal(),sampleData[:X])
Then, we run the regression corresponding to the sample moment condition,
linreg(array(sampleData[[:X,:invMills]]),array(sampleData[:Y]))
3-element Array{Float64,1}:
 -0.166541
  1.05648
  0.827454
We see that the estimate for $\beta_1$ is now 1.056, which is close to the true value of 1, compared to the non-corrected estimate of 1.452 above. Similarly, the estimate for $\beta_0$ has improved from -0.859 to -0.167, when the true value is 0. To see that the Heckman (1979) correction is consistent, we can increase the sample size $N$, which yields the estimates,
linreg(array(sampleData[[:X,:invMills]]),array(sampleData[:Y]))
3-element Array{Float64,1}:
 -0.00417033
  1.00697
  0.991503
which are very close to the true parameter values.
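An illustrative Python version of the correction (an assumed sketch, not the original code) adds the inverse Mills ratio as a regressor and recovers coefficients near the truth:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N = 100_000
x = rng.standard_normal(N)
eps = rng.standard_normal(N)
y = x + eps                        # beta_0 = 0, beta_1 = 1
sel = x > eps                      # selected sample: eps_i < x_i

xs, ys = x[sel], y[sel]
inv_mills = -norm.pdf(xs) / norm.cdf(xs)   # E[eps | eps < x]

# OLS of y on [1, x, inverse Mills ratio] in the selected sample
X = np.column_stack([np.ones_like(xs), xs, inv_mills])
b = np.linalg.lstsq(X, ys, rcond=None)[0]
print(b)  # roughly [0, 1, 1]
```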
Note that this analysis generalizes to the case in which $X$ contains $K$ observable variables and the selection rule is,

$$i \in S \text{ if } \alpha_0 + \alpha_1' X_i > \epsilon_i,$$

which is the case considered by Heckman (1979). The only difference is that the coefficients $\alpha_0, \alpha_1$ must first be estimated by regressing an indicator for $i \in S$ on $X_i$, then using the fitted equation within the inverse Mills ratio. This requires that we observe $X_i$ for $i \notin S$. Probit regression is covered in a slightly different context below.
As a matter of terminology, the process of estimating the selection coefficients $\alpha_0, \alpha_1$ is called the “first stage”, and estimating $\beta_0, \beta_1$ conditional on the estimates of $\alpha_0, \alpha_1$ is called the “second stage”. When the coefficient on the inverse Mills ratio is positive, it is said that “positive selection” has occurred, with “negative selection” otherwise. Positive selection means that, without the correction, the OLS estimates would have been upward-biased, while negative selection results in downward-biased estimates. Finally, because the selection rule is driven by an unobservable variable $\epsilon$, this is a case of “selection on unobservables”. In the next section we consider a case of “selection on observables”.
Example 2: Probit Selection on Observables
Suppose that we wish to know the mean and variance of $Y$ in the population. However, our sample of $Y$ suffers from selection bias. In particular, there is some observable $X$ such that the probability of observing $Y_i$ depends on $X_i$ according to,

$$\Pr[i \in S | X_i] = F(X_i),$$

where $F$ is some function with range $[0, 1]$. Notice that, if $X$ and $Y$ were independent, then the resulting sample distribution of $Y$ would be a random draw from the population (marginal) distribution of $Y$. Instead, we suppose $\mathrm{Cov}(X, Y) \neq 0$. For example,
using DataFrames
using Distributions

srand(2)
N = 10000
populationData = DataFrame(rand(MvNormal([0,0.],[1 .5;.5 1]),N)')
names!(populationData,[:X,:Y])

mean(array(populationData),1)
1x2 Array{Float64,2}:
 -0.0281916  -0.022319

cov(array(populationData))
2x2 Array{Float64,2}:
 0.98665   0.500912
 0.500912  1.00195
In this simulated population, the estimated mean and variance of $Y$ are -0.022 and 1.002, and the covariance between $X$ and $Y$ is 0.501. Now, suppose the probability that $Y_i$ is observed follows a probit in $X_i$,

$$\Pr[i \in S | X_i] = \Phi(\beta_0 + \beta_1 X_i),$$

where $\Phi$ is the CDF of the standard normal distribution. Letting $D_i = 1$ indicate that $Y_i$ is observed ($i \in S$), we can generate the sample selection rule as,
beta_0 = 0
beta_1 = 1
index = (beta_0 + beta_1*populationData[:X])
probability = cdf(Normal(0,1),index)
D = zeros(N)
for i=1:N
    D[i] = rand(Bernoulli(probability[i]))
end
populationData[:D] = D
sampleData = copy(populationData)  # copy so the population data is not modified below
sampleData[D.==0,:Y] = NA
The sample data has missing values in place of $Y_i$ whenever $D_i = 0$. The estimated mean and variance of $Y$ in the sample data are 0.275 (which is too large) and 0.862 (which is too small).
Correction 2: Inverse Probability Weighting
The reason for the biased estimates of the mean and variance of $Y$ in Example 2 is sample selection on the observable $X$. In particular, certain values of $Y$ are over-represented due to their relationship with $X$. Inverse probability weighting is a way to correct for the over-representation of certain types of individuals, where the “type” is captured by the probability of being included in the sample.
In the above simulation, conditional on appearing in the population, the probability that an individual of type $X_i = 1$ is included in the sample is $\Phi(0 + 1 \times 1) \approx 0.841$. By contrast, the probability that an individual of type $X_i = 0$ is included in the sample is $\Phi(0) = 0.5$, so type $X_i = 1$ is over-represented by a factor of 0.841/0.5 = 1.682. If we could reduce the impact that type $X_i = 1$ has in the computation of the mean and variance of $Y$ by a factor of 1.682, we would alter the balance of types in the sample to match the balance of types in the population. Inverse probability weighting generalizes this logic by weighting each individual’s impact by the inverse of the probability that this individual appears in the sample.
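The two probabilities quoted above are just standard normal CDF values under the probit selection rule, as a quick check shows:

```python
from scipy.stats import norm

# Inclusion probabilities under the probit rule Pr(D = 1 | X) = Phi(0 + 1 * X)
p1 = norm.cdf(1.0)    # type X = 1: about 0.841
p0 = norm.cdf(0.0)    # type X = 0: exactly 0.5
print(p1 / p0)        # about 1.682
```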
Before we can make the correction, we must first estimate the probability of sample inclusion. This can be done by fitting the probit regression above by maximum likelihood. For this, we use the GLM package in Julia, which can be installed the usual way with the command Pkg.add("GLM").
using GLM
Probit = glm(D ~ X, sampleData, Binomial(), ProbitLink())

DataFrameRegressionModel{GeneralizedLinearModel,Float64}:

Coefficients:
              Estimate Std.Error  z value Pr(>|z|)
(Intercept)   0.114665  0.148809 0.770554   0.4410
X              1.14826   0.21813  5.26414     1e-6

estProb = predict(Probit)
weights = 1./estProb[D.==1]/sum(1./estProb[D.==1])
which are the inverse probability weights needed to match the sample distribution to the population distribution.
Now, we use the inverse probability weights to correct the mean and variance estimates of $Y$,
correctedMean = sum(sampleData[D.==1,:Y].*weights)
-0.024566923132025013

correctedVariance = (N/(N-1))*sum((sampleData[D.==1,:Y]-correctedMean).^2.*weights)
1.0094029613131092
which are very close to the population values of -0.022319 and 1.00195. The logic here extends to the case of multivariate $X$, as more coefficients are added to the Probit regression. The logic also extends to other functional forms of $F$; for example, switching from Probit to Logit is achieved by replacing the ProbitLink() with LogitLink() in the glm() estimation above.
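An end-to-end illustrative sketch of inverse probability weighting in Python follows. To keep it short it uses the true inclusion probabilities rather than estimated ones; in practice they would come from the fitted probit:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 100_000

# Population: (X, Y) bivariate normal with unit variances and covariance 0.5
x = rng.standard_normal(N)
y = 0.5 * x + np.sqrt(0.75) * rng.standard_normal(N)

p = norm.cdf(0.0 + 1.0 * x)       # true probit inclusion probabilities
d = rng.random(N) < p             # D_i = 1 means Y_i is observed

w = 1 / p[d]
w /= w.sum()                      # normalized inverse probability weights

naive_mean = y[d].mean()          # biased upward by the selection
ipw_mean = np.sum(w * y[d])       # close to the population mean of 0
print(naive_mean, ipw_mean)
```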
Example 3: Generalized Roy Model
For the final example of this tutorial, we consider a model which allows for rich, realistic economic behavior. In words, the Generalized Roy Model is the economic representation of a world in which each individual must choose between two options, where each option has its own benefits, and one of the options costs more than the other. In math notation, the first alternative, denoted $D_i = 1$, relates the outcome $Y_{1,i}$ to the individual's observable characteristics, $X_i$, by,

$$Y_{1,i} = \mu_1(X_i) + U_{1,i}.$$

Similarly, the second alternative, denoted $D_i = 0$, relates $Y_{0,i}$ to $X_i$ by,

$$Y_{0,i} = \mu_0(X_i) + U_{0,i}.$$

The value of $Y_i$ that appears in our sample is thus given by,

$$Y_i = D_i Y_{1,i} + (1 - D_i) Y_{0,i}.$$

Finally, the value of $D_i$ is chosen by individual $i$ according to,

$$D_i = 1 \text{ if } Y_{1,i} - Y_{0,i} - C_i > 0,$$

where $C_i$ is the cost of choosing the $D_i = 1$ alternative and is given by,

$$C_i = \mu_C(X_i, Z_i) + U_{C,i},$$

where $Z_i$ contains additional characteristics of $i$ that are not included in $X_i$.
We assume that the data only contains $(Y_i, D_i, X_i, Z_i)$; it does not contain the variables $(Y_{1,i}, Y_{0,i}, C_i, U_{1,i}, U_{0,i}, U_{C,i})$ or the functions $(\mu_1, \mu_0, \mu_C)$. Assuming that the three functions follow the linear form and that the unobservables are independent and normally distributed, we can simulate the data generating process as,
using DataFrames
using Distributions

srand(2)
N = 1000
sampleData = DataFrame(rand(MvNormal([0,0.],[1 .5; .5 1]),N)')
names!(sampleData,[:X,:Z])
U1 = rand(Normal(0,.5),N)
U0 = rand(Normal(0,.7),N)
UC = rand(Normal(0,.9),N)
betas1 = [0,1]
betas0 = [.3,.2]
betasC = [.1,.1,.1]
Y1 = betas1[1] + betas1[2].*sampleData[:X] + U1
Y0 = betas0[1] + betas0[2].*sampleData[:X] + U0
C = betasC[1] + betasC[2].*sampleData[:X] + betasC[3].*sampleData[:Z] + UC
D = Y1-Y0-C.>0
Y = D.*Y1 + (1-D).*Y0
sampleData[:D] = D
sampleData[:Y] = Y
In this simulation, about 38% of individuals choose the alternative $D_i = 1$. About 10% of individuals choose $D_i = 0$ even though they receive greater benefits under $D_i = 1$ (that is, $Y_{1,i} > Y_{0,i}$), due to the high cost associated with $D_i = 1$.
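A quick illustrative replication of these headline shares in Python (a sketch with a larger sample, so exact percentages will differ slightly from the post's draw):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# X and Z jointly normal: unit variances, covariance 0.5
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
x, z = rng.multivariate_normal([0.0, 0.0], cov, size=N).T
u1 = rng.normal(0, 0.5, N)
u0 = rng.normal(0, 0.7, N)
uc = rng.normal(0, 0.9, N)

y1 = 0.0 + 1.0 * x + u1                 # benefits under D = 1
y0 = 0.3 + 0.2 * x + u0                 # benefits under D = 0
c = 0.1 + 0.1 * x + 0.1 * z + uc        # cost of choosing D = 1
d = (y1 - y0 - c) > 0                   # choice rule

share_d1 = d.mean()                     # share choosing D = 1, near 0.38
priced_out = ((y1 > y0) & ~d).mean()    # would benefit from D = 1 but choose D = 0
print(share_d1, priced_out)
```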
Solution 3: Heckman Correction for Generalized Roy Model
The identification of this model is attributable to Heckman and Honore (1990). Estimation proceeds in steps. The first step is to notice that the left- and right-hand terms in the following moment equation motivate a Probit regression:

$$\Pr[D_i = 1 | X_i, Z_i] = \Phi\!\left(\frac{\mu_1(X_i) - \mu_0(X_i) - \mu_C(X_i, Z_i)}{\sigma_V}\right),$$

where $V_i$ is the negative of the total error term arising in the equation that determines $D_i$ above, $V_i = -(U_{1,i} - U_{0,i} - U_{C,i})$, and,

$$\sigma_V^2 = \mathrm{Var}(V_i).$$

In the simulation above, $\sigma_V = \sqrt{0.5^2 + 0.7^2 + 0.9^2} \approx 1.245$. We can estimate the composite index coefficients, scaled by $1/\sigma_V$, from the Probit regression of $D$ on $X$ and $Z$.
betasD = coef(glm(D~X+Z,sampleData,Binomial(),ProbitLink()))
3-element Array{Float64,1}:
 -0.299096
  0.59392
 -0.103155
Next, notice that,

$$E[Y_i | D_i = 1, X_i, Z_i] = \mu_1(X_i) + E[U_{1,i} | D_i = 1, X_i, Z_i],$$

where,

$$E[U_{1,i} | D_i = 1, X_i, Z_i] = \rho_1 \lambda_1(\gamma_0 + \gamma_1 X_i + \gamma_2 Z_i), \qquad \lambda_1(v) = -\frac{\phi(v)}{\Phi(v)},$$

which is the inverse Mills ratio again, where $(\gamma_0, \gamma_1, \gamma_2)$ are the Probit coefficients estimated above and $\rho_1 = \mathrm{Cov}(U_{1,i}, V_i)/\sigma_V$. Substituting in the estimate for $(\gamma_0, \gamma_1, \gamma_2)$, we consistently estimate $\mu_1$:
fittedVals = hcat(ones(N),array(sampleData[[:X,:Z]]))*betasD
sampleData[:invMills1] = -pdf(Normal(0,1),fittedVals)./cdf(Normal(0,1),fittedVals)
correctedBetas1 = linreg(array(sampleData[D,[:X,:invMills1]]),vec(array(sampleData[D,[:Y]])))
3-element Array{Float64,1}:
  0.0653299
  0.973568
 -0.135445
To see how well the correction has performed, compare these estimates to the uncorrected estimates of $\mu_1$,
biasedBetas1 = linreg(array(sampleData[D,[:X]]),vec(array(sampleData[D,[:Y]])))
2-element Array{Float64,1}:
 0.202169
 0.927317
Similar logic allows us to estimate $\mu_0$:
sampleData[:invMills0] = pdf(Normal(0,1),fittedVals)./(1-cdf(Normal(0,1),fittedVals))
correctedBetas0 = linreg(array(sampleData[D.==0,[:X,:invMills0]]),vec(array(sampleData[D.==0,[:Y]])))
3-element Array{Float64,1}:
 0.340621
 0.207068
 0.323793

biasedBetas0 = linreg(array(sampleData[D.==0,[:X]]),vec(array(sampleData[D.==0,[:Y]])))
2-element Array{Float64,1}:
 0.548451
 0.295698
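Putting the two stages together, here is an illustrative end-to-end Python sketch of the Roy-model correction. It is a reconstruction under the post's parameter values, with the probit first stage fit by a hand-rolled maximum likelihood rather than GLM:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
N = 100_000
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
x, z = rng.multivariate_normal([0.0, 0.0], cov, size=N).T
u1 = rng.normal(0, 0.5, N)
u0 = rng.normal(0, 0.7, N)
uc = rng.normal(0, 0.9, N)
y1 = x + u1
y0 = 0.3 + 0.2 * x + u0
cost = 0.1 + 0.1 * x + 0.1 * z + uc
d = (y1 - y0 - cost) > 0
y = np.where(d, y1, y0)

# First stage: probit of D on (1, X, Z), fit by maximum likelihood
W = np.column_stack([np.ones(N), x, z])

def negll(b):
    p = norm.cdf(W @ b).clip(1e-10, 1 - 1e-10)
    return -(d * np.log(p) + (1 - d) * np.log(1 - p)).sum()

b_probit = minimize(negll, np.zeros(3), method="BFGS").x
idx = W @ b_probit

# Second stage: OLS of Y on [1, X, inverse Mills ratio] in the D = 1 subsample
mills1 = -norm.pdf(idx) / norm.cdf(idx)
X1 = np.column_stack([np.ones(d.sum()), x[d], mills1[d]])
beta1 = np.linalg.lstsq(X1, y[d], rcond=None)[0]
print(beta1)  # intercept and slope near the true (0, 1); Mills coefficient negative
```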
In summary, we can consistently estimate the benefits associated with each of two alternative choices, even though we only observe each individual in one of the alternatives, subject to heavy selection bias, by extending the logic introduced by Heckman (1979).
Bradley J. Setzler